Eight tips to relieve those Postgres headaches

Welcome to this practical, non-exhaustive guide to some of the common issues you’ll encounter while using Postgres. We’ll go beyond the basic advice of adding indices to the queries that trigger sequential scans. There are also a few extra tips for those using the AWS RDS service. Enjoy!

Power up your tooling

#1: Fancy EXPLAINs

There’s a good chance you’ve used EXPLAIN or perhaps even EXPLAIN ANALYZE to understand the source of your frustration, or perhaps the slowness of your database queries.

ANALYZE is not the only option that EXPLAIN accepts. There’s also COSTS, VERBOSE, FORMAT, BUFFERS, and TIMING. ANALYZE, which as you might already know is by far the most useful, shows the actual query execution instead of just the planner estimates. For the purposes of this tip, it’s not really important what each of these means. The docs can help with that.

If we leverage FORMAT JSON, we can use a great visualization tool called PEV (Postgres Explain Visualizer) by Alex Tatiyants. You can try it online. It loads the JSON output instead of the standard text representation and draws a nifty graph showing the dependencies, with the hot spots highlighted in red.

When debugging queries, I already have muscle memory for typing EXPLAIN (ANALYZE, COSTS, BUFFERS, VERBOSE, FORMAT JSON) SELECT …
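For example, against a hypothetical orders table (both the table and the filter are placeholders), the full incantation looks like this, and the JSON it prints can be pasted straight into PEV:

  EXPLAIN (ANALYZE, COSTS, BUFFERS, VERBOSE, FORMAT JSON)
  SELECT * FROM orders WHERE customer_id = 42;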

#2: pg_stat_statements

Run CREATE EXTENSION IF NOT EXISTS pg_stat_statements; on your database now. Seriously, I don’t fully understand why it’s not enabled by default when it’s shipped with Postgres. If I had to build an RPG party in a Postgres-themed world, this helper would be my first pick.

EXPLAIN helps you understand a query you already know about. pg_stat_statements tracks the performance of all your queries and groups the metrics by the query itself. It tracks min/max/mean execution times, IO performance, and the total number of rows returned.

If you’re looking to reduce the CPU/IO load on your database, this extension will quickly tell you where to look for the biggest marginal improvements. IO heavy queries can slip under the radar and cause unpredictable peaks that can take down a database that’s otherwise operating within a healthy load range.
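As a minimal sketch of where to start (the column names assume Postgres 13 or newer; older versions call them total_time and mean_time):

  SELECT query,
         calls,
         total_exec_time,   -- cumulative time spent in this statement
         mean_exec_time,    -- average time per call
         rows,              -- total rows returned
         shared_blks_read   -- blocks that had to be read from disk
  FROM pg_stat_statements
  ORDER BY total_exec_time DESC
  LIMIT 10;

Ordering by shared_blks_read instead surfaces the IO-heavy queries mentioned above.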

Ordering randomly (think ORDER BY random()) is a straightforward way to produce IO spikes in your monitoring: the query hits many uncached parts of the table at random and forces loads from the disk.

#3: pg_stat_statements on AWS RDS

The extension is also available on AWS RDS instances, but you can opt into a better feature called Performance Insights. As far as I can tell from the UI and API, it uses the same extension under the hood, but it adds something extra valuable: the evolution of the metrics tracked by pg_stat_statements over time.

The typical scenario is that your database suffered a spike a few hours ago, but the queries causing it are long gone. You find yourself waiting for the next occurrence to get some examples for your EXPLAIN sessions.

With Performance Insights, you can retrospectively see which query was the culprit at 1-minute intervals, in terms of CPU usage, data transferred from disk/to the client, and locking.

#4: Analyzing your logs: pganalyze or pgBadger

These two tools are similar to the previous extension in the sense that they analyze your database’s performance retrospectively. They both use the native database log file, parse it, and extract information about connections/disconnections, locks, long-running queries, errors, checkpoints, and a lot more.

pganalyze is a paid, hosted service, whereas pgBadger is a free, self-hosted alternative. The simplest setup I know for pgBadger on AWS RDS is to use a script like sportngin/rds-pgbadger, which uses the AWS API to download the log files automatically and output the HTML. You can set up the script to run on an EC2 instance in a daily cron and output a separate HTML file for each day, which you can then view.

Both tools require you to set the log format string (log_line_prefix) appropriately and choose a minimum query duration threshold for logging. Set the threshold too low and a large number of fast queries can fill up your logs too quickly; set it too high and you lose visibility of those small-but-frequent queries, which can skew your statistics.
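A rough sketch of the relevant settings (the values are illustrative, and on RDS you would change them through a parameter group rather than ALTER SYSTEM):

  -- log every statement slower than 250 ms; 0 logs everything, -1 disables it
  ALTER SYSTEM SET log_min_duration_statement = '250ms';
  -- a prefix pgBadger can parse: timestamp, pid, user, database, application, client
  ALTER SYSTEM SET log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h ';
  -- also log queries that spill to temporary files (ties into tip #8)
  ALTER SYSTEM SET log_temp_files = 0;
  SELECT pg_reload_conf();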

Common root causes and useful configurations

#5: The database is locking up because of idle transactions

Unintentionally, you might introduce the following pattern in your code (similar things happen in Java when an @Transactional annotation wraps a method that then calls an external API or other long-running code).
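Reduced to what the database sees, the pattern looks something like this (the table and column names are made up for illustration):

  BEGIN;
  -- 1) the application calls an external API to fetch the current price
  --    (no rows are locked yet, so this part is harmless)
  UPDATE stocks SET price = 123.45 WHERE ticker = 'ACME';  -- the row is now locked
  -- 2) the application sends push notifications to mobile users
  --    (seconds pass, and the row above stays locked the whole time)
  COMMIT;  -- only here is the lock finally released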

There are two potentially long-running calls above: the external API call to get the current price, and a second call that sends a push notification to your mobile users. It’s all wrapped in a transaction that you might need for some other writes to be atomic.

The first call is fine. But during the second one, the stock row stays locked in the database and is only released when the transaction commits after the notification call finishes. This can block many operations that need to update the same stock row or alter that table.

In the pg_stat_activity output (SELECT * FROM pg_stat_activity), you will see these queries as state = 'idle in transaction'. The connection is not doing anything, but it has opened a transaction with BEGIN that it hasn’t closed yet. Seeing this state in the list of currently executed queries is a big smell and can be the root cause of a lot of unpredictable spikes in query times.
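A quick way to list the offenders, together with how long their transactions have been open (a minimal sketch):

  SELECT pid, usename, xact_start, now() - xact_start AS open_for, query
  FROM pg_stat_activity
  WHERE state = 'idle in transaction'
  ORDER BY xact_start;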

The solution is simply to move your long-running calls outside of your transactions.

#6: Partial indices

Let’s take a look at the following table, which holds issues for an issue tracker. The data is split into two parts by an archived column (this could just as well be a deleted flag if you’re using a soft-deletion library).
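Imagine something along these lines (a hypothetical definition):

  CREATE TABLE issues (
      id       bigserial PRIMARY KEY,
      title    text NOT NULL,
      status   text NOT NULL,                  -- e.g. 'open', 'in_progress', 'closed'
      archived boolean NOT NULL DEFAULT false  -- the bulk of the rows end up archived
  );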

You’re running a query that looks for open, non-archived issues, and EXPLAIN tells you it’s slow because it triggers a sequential scan. Your first instinct is to add an index, maybe CREATE INDEX ON issues (archived, status), or perhaps the other way around. The new query uses an index scan and is much faster, but you can do better.
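The query in question would be roughly this (the exact predicate is an assumption):

  SELECT * FROM issues
  WHERE archived = false AND status = 'open';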

We could argue that data not being queried at all should be moved elsewhere, but we all know that’s not the case as often as it should be. And it’s not easy to do either.

By including the archived column in the index, you’re unnecessarily indexing all the rows in the archived part of the table, which takes up 80% of it. 99% of your queries don’t touch it, and certainly not the ones looking for open issues.

A boolean column also has very low cardinality (2 possible values), so it doesn’t help the database engine too much in locating the rows on the disk. It will still have to sift through a large part of the table.

A better option is a partial index: CREATE INDEX ON issues (status) WHERE archived = false. The index only covers the smaller part of the table, so it will be much smaller on disk, cheaper to maintain, and cheaper to vacuum. It will automatically be used when a query contains the same WHERE condition as the index definition.

In general, anytime you’re adding a boolean or other low-cardinality column to an index, ask yourself whether a partial index wouldn’t be a more efficient solution.

#7: Index statistics

The Index Maintenance page from the official Postgres wiki is another top bookmark in my Postgres folder. In particular, the Index size/usage statistics and Duplicate indexes queries can give you a quick health report of your index setup.

The two queries have helped me uncover several things:

  • Unused indices (number_of_scans being very low or zero) ⇒ simply delete them and reduce the overhead of UPDATING rows on the given tables. A stripped-down version of the usage query is sketched after this list.
  • Duplicate indices, where one of them completely covers the other, are uselessly maintained and add overhead on writes.
  • Queries with a low tuples_fetched / tuples_read ratio. Often, this means that even though queries are executed through an index scan, the result set still has to be walked through and a big part of it filtered away (a second index column could help).
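If you just want a quick look without opening the wiki, a stripped-down version of the usage statistics can be pulled straight from pg_stat_user_indexes (a sketch, not the wiki’s full query):

  SELECT schemaname,
         relname      AS table_name,
         indexrelname AS index_name,
         idx_scan,          -- how many times the index was used
         idx_tup_read,
         idx_tup_fetch,
         pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
  FROM pg_stat_user_indexes
  ORDER BY idx_scan ASC, pg_relation_size(indexrelid) DESC;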

#8: Cons of raising max_connections

work_mem is a key configuration that says how much memory a connection can use for a single sort or hash operation while executing a query (a complex query may use several multiples of it). If the value is too low and a big dataset is being processed (aggregated, sorted), the connection will have to spill to disk, leading to much slower execution times. You can see how often this happens by enabling log_temp_files (and see tip #4 for how to keep track of it).

shared_buffers says how much memory is reserved for a “cache” shared by all connections. Frequently accessed data is kept there. If the value is too low, queries will be slower because the data must be fetched from the disk.

The basic equation is that shared_buffers + max_connections * work_mem < server RAM. The goal is to maximize both work_mem and shared_buffers. There are plenty of online resources about how to set these values.
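To make that concrete with made-up numbers: on a 16 GB server with shared_buffers set to 4 GB and max_connections at 500, work_mem has to stay somewhere below (16 - 4) GB / 500 ≈ 24 MB; drop max_connections to 100 and the same budget allows roughly 120 MB per connection.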

The point I wanted to highlight is how the max_connections setting comes into play. Web apps with inefficient DB pooling or aggressive scaling can quickly drive up the number of connections, and the developer responds by raising the setting to allow even more. But to avoid OOMs, work_mem must then be reduced to keep the invariant in the equation above, which in turn leads to more disk usage and overall higher query times.

If you find yourself increasing max_connections repeatedly, it might be time to introduce a pooling proxy. The most established software for this is PgBouncer, but if you’re using AWS, you might also want to look into their RDS Proxy service.
