PostgreSQL Tricks

PostgreSQL Activity

Do you want to see what your database is doing in the same way that you can see what your system is doing? Of course by that I mean you want top but for your database. That exists and it’s called pg_activity.

This fantastic tool will show you what queries are running, what queries are blocking, and what queries are blocked. Then you can dig into the query and kill it or whatever you need to do.
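Getting started is quick. pg_activity is a Python application, so one way to install and run it looks like this (the host, user, and database names here are placeholders for your own environment, not anything from my setup):

```shell
# Install pg_activity; pipx keeps it isolated from your system Python.
pipx install pg_activity

# Connect with the usual libpq-style options and watch your queries live.
pg_activity -h db.infra.example.com -p 5432 -U postgres -d toolsdb
```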

The output of pg_activity including all queries actively going against the database.

A hidden feature of this tool: if you're logged in directly on your database server as the "root" user, then you can see not only which queries are running but also how much CPU, memory, and disk I/O each query is using, in real time.

Extracting Database Schemas

It’s obvious to most people to use pg_dump or pg_dumpall to get a gigantic file with all of your schema information (and possibly your data) in it. But what if you want to see a representation of your schema with one file per object? This way you can track schema changes with, I don’t know, source control. And then people on your team can submit pull requests to the schema extract to illustrate what changes they’re about to make to the database. Such a tool exists. It’s called pg_extractor and it was made by OmniTI. We run this tool nightly against our database to record changes to the schema. We have it do some extra work, too, to pull out any changes to pg_hba.conf, postgresql.conf, and postgresql.auto.conf. Our little script looks like this:

#!/bin/bash

pg_extractor --host db.infra.example.com --port 5432 --username postgres --dbname toolsdb --getall --orreplace --schemadir --delete --basedir sql --nodbnamedir --quiet

mkdir -p ./conf
scp -p db.infra.example.com:/usr/local/pgsql/data/postgresql.conf ./conf/
scp -p db.infra.example.com:/usr/local/pgsql/data/postgresql.auto.conf ./conf/
scp -p db.infra.example.com:/usr/local/pgsql/data/pg_hba.conf ./conf/

We wrap this with some code that commits any changes to a git repository and pushes them to our git server, and now we can see how our database changes from day to day.
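That wrapper isn't shown here, but a minimal sketch of the idea might look like the following. The repository path, script name, remote, and branch are all hypothetical stand-ins, not our actual setup:

```shell
#!/bin/bash
# Hypothetical wrapper: run the extract, then commit and push anything that changed.
set -e
cd /srv/schema-extract        # repo containing the sql/ and conf/ directories
./extract.sh                  # the pg_extractor + scp script above

git add -A
# Commit only when something actually changed, then push.
if ! git diff --cached --quiet; then
    git commit -m "schema snapshot $(date +%Y-%m-%d)"
    git push origin main
fi
```

Run it from cron right after the nightly extract and the git history becomes a day-by-day changelog of your schema.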

Showing Object Ownership and Privileges

Source: https://gist.github.com/plockaby/5398814a7d160a9597d5516482bf85ce

It’s often handy for auditing to be able to quickly ascertain the ownership and privileges assigned to every object in a database. For example, say you want to ensure that everything is owned by the postgres user. Or say that you want to ensure that your read-only user truly has only read access to all objects in the database. How do you figure that out? Cobbled together from sources on the Internet and updated to support PostgreSQL 10 and higher, I’ve written a query that does just that; it’s in the GitHub Gist linked above.

So if you want to ensure that all of your objects are owned by the postgres user:

SELECT type, schema, name, owner
FROM show_object_ownership
WHERE owner != 'postgres'

And if you want to ensure that your read-only user has only read access:

SELECT type, schema, name, objuser, privilege_pretty
FROM show_object_privileges
WHERE objuser IN ('ro-user', 'public') AND (
    -- no permissions on tablespaces
    (type = 'tablespace') OR
    -- only usage on schemas, sequences, and other things
    (type IN ('schema', 'sequence', 'domain', 'type', 'foreign data wrapper', 'foreign server', 'language') AND privilege_pretty != 'USAGE') OR  
    -- only execute on functions, windows, and aggregates
    (type IN ('function', 'window', 'aggregate') AND privilege_pretty != 'EXECUTE') OR
    -- only select on tables and views
    (type IN ('table', 'foreign table', 'view', 'materialized view') AND privilege_pretty != 'SELECT')
)

Generating Database Performance Graphs

Source: https://gist.github.com/plockaby/b4d818e545c98b0700ecc59ef84d02da

If you haven’t used Graphite (combined with Grafana) and collectd then you’re missing out on beautiful data from your systems. It’s easy to shovel data from PostgreSQL into Graphite. First, there is one extension that makes this much more informative: pg_buffercache.

The pg_buffercache extension tells you, in real time, what the shared buffers are doing in your database. If you are wondering whether you should make your shared buffer cache larger, or whether things are staying in the cache or being swapped out frequently, then this extension is what you need. For example, in our database at work you can see that over the last day our “heavily used” buffer has held steady at about 1.23GB. This is a good sign, as this buffer should be where data stays the longest. We can also see that, most recently, our shared buffer had very little unused space in it. We might benefit from allocating more memory to our shared buffer, but since data is staying in it a long time we might not. It’s worth investigating.
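To give a feel for the kind of query that feeds a graph like that, here is a sketch that buckets shared buffers by their usage count once the extension is installed. It assumes the default 8kB block size; a NULL usagecount means the buffer is currently unused:

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- How much of shared_buffers sits at each usage count. Higher usage counts
-- correspond to the "heavily used" portion of the cache; NULL means unused.
SELECT usagecount,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS size
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;
```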

Buffer cache usage on our database server. The weird blip at midnight is when we were running backups so the cache was pretty static.

Once you have pg_buffercache and collectd set up, and you’ve installed the “postgresql” plugin into collectd, add the configuration mentioned in the GitHub Gist linked above for collecting data from your cluster instance and start looking at your own pretty graphs containing the following information:

  • Total connection count.
  • Connection state counts for the cluster and each database in the cluster, separated by active and waiting. States include “active”, “idle”, “idle in transaction”, “idle in transaction (aborted)”, “fastpath”, and “disabled”.
  • Concurrent transactions for the cluster and the database separated by active and waiting.
  • Database size on disk.
  • Buffer cache size for each database.
  • Commit/Rollback ratios for the cluster and each database.
  • Individual database statistics including:
    • Number of backends
    • Number of commits and rollbacks
    • Block reads and writes
    • Index scans and updates
    • Sequential scans
    • Tuples returned, inserted, updated, and deleted
    • Deadlocks
    • Block read and write times
  • Disabled trigger count.
  • Transaction log count.
  • Hit ratio for the cluster and for each database.
  • Maximum query, transaction, and wait lengths.
  • Replica delays.
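The Gist has the full configuration; to give a feel for its shape, a stripped-down collectd postgresql plugin block looks roughly like this. The query, database, host, and user here are illustrative placeholders, not the Gist's actual contents:

```
LoadPlugin postgresql
<Plugin postgresql>
    # A custom query: shared buffer cache usage per database.
    <Query buffercache>
        Statement "SELECT d.datname, count(b.*) * 8192 AS bytes FROM pg_buffercache b JOIN pg_database d ON b.reldatabase = d.oid GROUP BY d.datname"
        <Result>
            Type "bytes"
            InstancePrefix "buffercache"
            InstancesFrom "datname"
            ValuesFrom "bytes"
        </Result>
    </Query>
    <Database toolsdb>
        Host "db.infra.example.com"
        User "collectd"
        Query buffercache
    </Database>
</Plugin>
```

Each Query block pairs a SQL statement with a Result block mapping its columns onto collectd value types, and each Database block says which queries to run where.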

This information, when displayed in Grafana and combined with Grafana alerts, can really help diagnose performance issues and predict outages.

Monitoring Bloat

Another handy extension to consider installing in your database is pgstattuple. When combined with the pg_bloat_check tool, this extension will tell you which tables aren’t being vacuumed quickly enough and which indexes need to be rebuilt. Monitoring bloat, say, once per week or month is very helpful if you have tables that change a lot and vacuum can’t keep up with the number of changes made to them in any reasonable amount of time. Bloated tables take longer to read because a lot of data blocks are being read and then entirely discarded.
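You can also query pgstattuple directly for a quick spot check on a single table. The table name below is a placeholder, and note that this function scans the whole table, so don't run it casually against something enormous:

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- dead_tuple_percent and free_percent are the bloat indicators:
-- persistently high values mean vacuum isn't keeping up.
SELECT table_len,
       tuple_percent,
       dead_tuple_percent,
       free_percent
FROM pgstattuple('some_big_table');
```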

A lot of people have written much better descriptions of bloat than I have, and you probably don’t need to worry about this if your database is small and changes infrequently. However, at my job we have two database tables, one that updates 100,000 rows every five minutes and another that updates 1.2 million rows every thirty minutes, so we pay attention to it. (We also have a trigger on most of our tables so that if an update isn’t actually changing the data, the update is ignored. This helps reduce bloat growth by avoiding rewriting rows that aren’t changing.)
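PostgreSQL actually ships a helper for that last trick: the built-in suppress_redundant_updates_trigger() function skips any UPDATE that would write a row identical to the one already there. Using a placeholder table name for illustration:

```sql
-- Skip UPDATEs that wouldn't change the row. The "z_" prefix makes this
-- trigger fire last, since triggers on the same event run in name order.
CREATE TRIGGER z_suppress_redundant_updates
    BEFORE UPDATE ON some_big_table
    FOR EACH ROW
    EXECUTE PROCEDURE suppress_redundant_updates_trigger();
```

The comparison itself costs a little on every update, so this is a win mainly on tables where many updates are no-ops, which is exactly the situation described above.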