Monitor PostgreSQL performance with pg_stat_statements extension for query analysis and optimization

Intermediate · 25 min · Apr 18, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Set up pg_stat_statements extension to collect query statistics, analyze slow queries, and optimize PostgreSQL database performance with automated monitoring alerts.

Prerequisites

  • PostgreSQL 12 or newer installed
  • sudo access
  • Basic SQL knowledge

What this solves

The pg_stat_statements extension tracks execution statistics for all SQL statements in PostgreSQL, giving you detailed insights into query performance, resource usage, and bottlenecks. This tutorial shows you how to enable the extension, configure optimal settings, analyze query data, and set up automated monitoring with Grafana dashboards for production database optimization.

Step-by-step installation

Update system packages

Start by updating your package manager to ensure you have the latest PostgreSQL packages available.

# Debian/Ubuntu
sudo apt update && sudo apt upgrade -y

# AlmaLinux/Rocky Linux
sudo dnf update -y

Install PostgreSQL contrib package

The pg_stat_statements extension is included in the postgresql-contrib package. Install it for your PostgreSQL version.

# Debian/Ubuntu
sudo apt install -y postgresql-contrib

# AlmaLinux/Rocky Linux
sudo dnf install -y postgresql-contrib

Configure shared_preload_libraries

Add pg_stat_statements to PostgreSQL's shared_preload_libraries setting. Because the library must be loaded when the server starts, this change requires a full restart, not a reload.

sudo -u postgres psql -c "ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';"
sudo systemctl restart postgresql

Create pg_stat_statements extension

Connect to your database and create the extension. Run this on each database where you want to collect statistics.

sudo -u postgres psql -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
sudo -u postgres psql -c "SELECT * FROM pg_stat_statements LIMIT 1;"

Configure pg_stat_statements parameters

Optimize the extension settings for better query tracking and performance analysis. These settings control how many statements to track and what data to collect.

sudo -u postgres psql -c "ALTER SYSTEM SET pg_stat_statements.max = 10000;"
sudo -u postgres psql -c "ALTER SYSTEM SET pg_stat_statements.track = 'all';"
sudo -u postgres psql -c "ALTER SYSTEM SET pg_stat_statements.track_utility = 'on';"
sudo -u postgres psql -c "ALTER SYSTEM SET pg_stat_statements.save = 'on';"
# pg_stat_statements.max is only read at server start; the other three
# parameters would take effect on reload, but a restart covers all four.
sudo systemctl restart postgresql

Query analysis and optimization

Analyze top slow queries

Use pg_stat_statements to identify the slowest queries by total execution time. This query shows the top 10 queries consuming the most database time. (On PostgreSQL 12 the timing columns are named total_time, mean_time, and stddev_time; the _exec_ variants used below exist from PostgreSQL 13 onward.)

sudo -u postgres psql -c "
SELECT 
  round((100 * total_exec_time / sum(total_exec_time) OVER())::numeric, 2) AS percent_total,
  round(total_exec_time::numeric, 2) AS total_time,
  calls,
  round(mean_exec_time::numeric, 2) AS mean_time,
  substring(query, 1, 80) AS query_snippet
FROM pg_stat_statements 
ORDER BY total_exec_time DESC 
LIMIT 10;
"
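To sanity-check the percent_total column, the same arithmetic can be reproduced outside the database. A minimal sketch with illustrative, made-up per-query totals (not figures from a real server):

```python
# Reproduce the percent_total calculation from the SQL above:
# each query's share of the grand total execution time.
totals = {"q1": 9000.0, "q2": 800.0, "q3": 200.0}  # illustrative ms values

grand_total = sum(totals.values())  # matches sum(total_exec_time) OVER()
percent_total = {q: round(100 * t / grand_total, 2) for q, t in totals.items()}

for q, pct in sorted(percent_total.items(), key=lambda kv: -kv[1]):
    print(f"{q}: {pct}% of total execution time")
```

A single query accounting for the bulk of the total, as q1 does here, is usually the first optimization target even if its mean time looks modest.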

Find queries with high I/O usage

Identify queries that perform the most disk reads and writes. High I/O queries often indicate missing indexes or inefficient query patterns.

sudo -u postgres psql -c "
SELECT 
  calls,
  shared_blks_read + shared_blks_hit AS total_blocks,
  shared_blks_read,
  shared_blks_hit,
  round((shared_blks_read * 100.0 / (shared_blks_read + shared_blks_hit))::numeric, 2) AS miss_ratio,
  substring(query, 1, 80) AS query_snippet
FROM pg_stat_statements 
WHERE shared_blks_read + shared_blks_hit > 1000
ORDER BY shared_blks_read DESC 
LIMIT 10;
"
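The miss_ratio column is just the share of block requests that bypassed PostgreSQL's shared buffers. A small sketch of the same calculation, with illustrative block counts:

```python
# Cache miss ratio for one statement, mirroring the SQL above: the share
# of block requests served by shared_blks_read (disk or OS cache) rather
# than shared_blks_hit (PostgreSQL's shared buffers).
def miss_ratio(shared_blks_read: int, shared_blks_hit: int) -> float:
    total = shared_blks_read + shared_blks_hit
    if total == 0:  # the SQL filters such rows out with WHERE total > 1000
        return 0.0
    return round(shared_blks_read * 100.0 / total, 2)

# Illustrative numbers: 150 disk reads out of 10,150 block requests.
print(miss_ratio(150, 10_000))  # low ratio: most requests hit the cache
```

A persistently high ratio on a hot query suggests a missing index or a working set larger than shared_buffers.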

Identify queries with high variance

Find queries with inconsistent execution times. High standard deviation indicates queries that sometimes run fast and sometimes slow.

sudo -u postgres psql -c "
SELECT 
  calls,
  round(mean_exec_time::numeric, 2) AS mean_time,
  round(stddev_exec_time::numeric, 2) AS stddev_time,
  round((stddev_exec_time / mean_exec_time)::numeric, 2) AS variance_ratio,
  substring(query, 1, 80) AS query_snippet
FROM pg_stat_statements 
WHERE calls > 100 AND stddev_exec_time > 0
ORDER BY stddev_exec_time DESC 
LIMIT 10;
"
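The variance_ratio column is the coefficient of variation: standard deviation divided by mean. A quick sketch with illustrative timings:

```python
# Coefficient of variation for a query's execution time, as in the SQL
# above. Values well above 1 mean runtime is highly inconsistent.
def variance_ratio(mean_exec_time: float, stddev_exec_time: float) -> float:
    return round(stddev_exec_time / mean_exec_time, 2)

# Illustrative numbers: mean 40 ms, stddev 120 ms -> ratio 3.0,
# a query that occasionally runs far slower than its average.
print(variance_ratio(40.0, 120.0))
```

Such queries often depend on a parameter value (plan flips, lock waits, cold cache) rather than being uniformly slow.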

Reset statistics for fresh analysis

Clear pg_stat_statements data to start collecting fresh statistics. This is useful when you want to analyze query patterns over a specific time period.

sudo -u postgres psql -c "SELECT pg_stat_statements_reset();"
Warning: This permanently deletes all collected query statistics. Only run this when you want to start fresh data collection.
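If you would rather keep history, an alternative to resetting is to snapshot the view periodically and diff consecutive snapshots: the counters are cumulative between restarts, so subtracting two snapshots yields the activity in that window. A sketch of the diff step, using hypothetical snapshot dicts keyed by queryid:

```python
# Diff two pg_stat_statements snapshots instead of resetting.
# Cumulative counters (calls, total_exec_time) only grow between server
# restarts, so snapshot_t1 - snapshot_t0 is the activity in the window.
# The snapshot contents here are hypothetical.
def diff_snapshots(t0: dict, t1: dict) -> dict:
    window = {}
    for queryid, after in t1.items():
        before = t0.get(queryid, {"calls": 0, "total_exec_time": 0.0})
        window[queryid] = {
            "calls": after["calls"] - before["calls"],
            "total_exec_time": after["total_exec_time"] - before["total_exec_time"],
        }
    return window

t0 = {101: {"calls": 50, "total_exec_time": 400.0}}
t1 = {101: {"calls": 80, "total_exec_time": 1000.0},
      202: {"calls": 5, "total_exec_time": 25.0}}  # query first seen after t0
print(diff_snapshots(t0, t1))
```

This is essentially what the Prometheus exporter set up below does for you, since PromQL operates on counter deltas.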

Performance monitoring dashboards

Install PostgreSQL Prometheus exporter

Install postgres_exporter to collect PostgreSQL metrics including pg_stat_statements data for Grafana visualization.

wget https://github.com/prometheus-community/postgres_exporter/releases/download/v0.15.0/postgres_exporter-0.15.0.linux-amd64.tar.gz
tar xzf postgres_exporter-0.15.0.linux-amd64.tar.gz
sudo mv postgres_exporter-0.15.0.linux-amd64/postgres_exporter /usr/local/bin/
sudo chown root:root /usr/local/bin/postgres_exporter
sudo chmod 755 /usr/local/bin/postgres_exporter

Create monitoring user and configure access

Create a dedicated PostgreSQL user for metrics collection. Membership in the built-in pg_monitor role grants read access to the statistics views, including the query text in pg_stat_statements, which plain SELECT grants on the views do not. Replace the example password with your own.

sudo -u postgres psql -c "
CREATE USER postgres_exporter WITH PASSWORD 'SecureMonitoringPass2024!';
GRANT CONNECT ON DATABASE postgres TO postgres_exporter;
GRANT pg_monitor TO postgres_exporter;
"

Configure postgres_exporter service

Create a systemd unit file at /etc/systemd/system/postgres_exporter.service with the following contents, adjusting the password in DATA_SOURCE_NAME to match the monitoring user.

[Unit]
Description=PostgreSQL Prometheus Exporter
After=network.target

[Service]
Type=simple
# On AlmaLinux/Rocky Linux use Group=nobody; nogroup exists on Debian/Ubuntu.
User=nobody
Group=nogroup
Environment=DATA_SOURCE_NAME=postgresql://postgres_exporter:SecureMonitoringPass2024!@localhost:5432/postgres?sslmode=disable
ExecStart=/usr/local/bin/postgres_exporter --extend.query-path=/etc/postgres_exporter/queries.yaml
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Create custom query configuration

Configure custom queries to collect pg_stat_statements metrics for detailed query performance monitoring. Create the config directory, then save the YAML below as /etc/postgres_exporter/queries.yaml (the path passed via --extend.query-path in the unit file).

sudo mkdir -p /etc/postgres_exporter
pg_stat_statements:
  query: |
    SELECT
      queryid,
      calls,
      total_exec_time,
      mean_exec_time,
      max_exec_time,
      min_exec_time,
      stddev_exec_time,
      rows,
      shared_blks_hit,
      shared_blks_read,
      shared_blks_dirtied,
      shared_blks_written,
      local_blks_hit,
      local_blks_read,
      local_blks_dirtied,
      local_blks_written,
      temp_blks_read,
      temp_blks_written,
      substring(query, 1, 100) as query_snippet
    FROM pg_stat_statements
    WHERE calls > 10
    ORDER BY total_exec_time DESC
    LIMIT 100;
  metrics:
    - queryid:
        usage: LABEL
        description: "Query ID"
    - query_snippet:
        usage: LABEL  
        description: "Query snippet"
    - calls:
        usage: COUNTER
        description: "Number of times executed"
    - total_exec_time:
        usage: COUNTER
        description: "Total time spent executing"
    - mean_exec_time:
        usage: GAUGE
        description: "Mean execution time"
    - max_exec_time:
        usage: GAUGE
        description: "Maximum execution time"
    - shared_blks_read:
        usage: COUNTER
        description: "Shared blocks read from disk"

Start PostgreSQL exporter service

Enable and start the postgres_exporter service to begin collecting metrics for Prometheus.

sudo systemctl daemon-reload
sudo systemctl enable --now postgres_exporter
sudo systemctl status postgres_exporter

Automated query optimization alerts

Configure Prometheus alert rules

Set up alert rules to notify you when query performance degrades or resource usage spikes occur.

groups:
  - name: postgresql_performance
    rules:
      - alert: PostgreSQLSlowQueries
        expr: increase(pg_stat_statements_mean_exec_time[5m]) > 1000
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "PostgreSQL slow query detected"
          description: "Query {{ $labels.queryid }} has mean execution time above 1000ms"
      - alert: PostgreSQLHighIOWait
        expr: rate(pg_stat_statements_shared_blks_read[5m]) > 1000
        for: 3m
        labels:
          severity: warning
        annotations:
          summary: "PostgreSQL high I/O detected"
          description: "Query {{ $labels.queryid }} is reading {{ $value }} blocks per second from disk"
      - alert: PostgreSQLConnectionSpike
        expr: pg_stat_database_numbackends > 80
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "PostgreSQL connection count high"
          description: "Database {{ $labels.datname }} has {{ $value }} active connections"
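The PostgreSQLHighIOWait expression relies on Prometheus's rate(), which turns a cumulative counter into a per-second rate over the window. A rough sketch of that calculation on two hypothetical counter samples (ignoring counter resets and Prometheus's boundary extrapolation):

```python
# Roughly what rate(counter[5m]) computes: the per-second increase of a
# cumulative counter between two samples. Sample values are hypothetical.
def simple_rate(value_start: float, value_end: float, seconds: float) -> float:
    return (value_end - value_start) / seconds

# 450,000 blocks read at the start of a 5-minute window, 780,000 at the end:
blocks_per_sec = simple_rate(450_000, 780_000, 300)
print(blocks_per_sec)  # 1100.0 -> would fire the > 1000 alert above
```

This is why alerting on rate() is robust to the counters growing without bound: only the slope matters, not the absolute value.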

Create automated query analysis script

Save the following script as /usr/local/bin/analyze_queries.sh to regularly summarize pg_stat_statements data into a performance report.

#!/bin/bash
# PostgreSQL Query Analysis Script

REPORT_DIR="/var/log/postgresql-reports"
DATE=$(date +%Y-%m-%d_%H-%M)
REPORT_FILE="$REPORT_DIR/query_report_$DATE.txt"

mkdir -p "$REPORT_DIR"

echo "PostgreSQL Query Performance Report - $DATE" > "$REPORT_FILE"
echo "================================================" >> "$REPORT_FILE"
echo "" >> "$REPORT_FILE"

echo "TOP 10 SLOWEST QUERIES BY TOTAL TIME:" >> "$REPORT_FILE"
sudo -u postgres psql -c "
SELECT
  round((100 * total_exec_time / sum(total_exec_time) OVER())::numeric, 2) AS percent_total,
  round(total_exec_time::numeric, 2) AS total_time,
  calls,
  round(mean_exec_time::numeric, 2) AS mean_time,
  substring(query, 1, 120) AS query_snippet
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
" >> "$REPORT_FILE"

echo "" >> "$REPORT_FILE"
echo "TOP 10 QUERIES BY I/O USAGE:" >> "$REPORT_FILE"
sudo -u postgres psql -c "
SELECT
  calls,
  shared_blks_read + shared_blks_hit AS total_blocks,
  shared_blks_read,
  round((shared_blks_read * 100.0 / (shared_blks_read + shared_blks_hit))::numeric, 2) AS miss_ratio,
  substring(query, 1, 120) AS query_snippet
FROM pg_stat_statements
WHERE shared_blks_read + shared_blks_hit > 100
ORDER BY shared_blks_read DESC
LIMIT 10;
" >> "$REPORT_FILE"

echo "Query analysis report saved to: $REPORT_FILE"
sudo chmod 755 /usr/local/bin/analyze_queries.sh
sudo mkdir -p /var/log/postgresql-reports

Schedule automated analysis

Set up a cron job to run query analysis every hour and generate performance reports automatically.

sudo crontab -e

Add this line to run analysis every hour at minute 30:

30 * * * * /usr/local/bin/analyze_queries.sh

Verify your setup

Check that pg_stat_statements is collecting data and your monitoring is working correctly.

# Verify the library is preloaded and the extension is installed
sudo -u postgres psql -c "SHOW shared_preload_libraries;"
sudo -u postgres psql -c "SELECT extname, extversion FROM pg_extension WHERE extname = 'pg_stat_statements';"

Check statistics collection

sudo -u postgres psql -c "SELECT count(*) FROM pg_stat_statements;"

Test postgres_exporter

curl -s http://localhost:9187/metrics | grep pg_stat_statements | head -5

Verify service status

sudo systemctl status postgres_exporter

Run manual analysis

sudo /usr/local/bin/analyze_queries.sh

Common issues

Symptom | Cause | Fix
Extension not found after creation | shared_preload_libraries not configured | Add pg_stat_statements to shared_preload_libraries and restart PostgreSQL
No query statistics collected | Extension created in wrong database | Create the extension in each database where you want statistics: CREATE EXTENSION pg_stat_statements;
postgres_exporter connection failed | Wrong connection string or permissions | Check DATA_SOURCE_NAME in the service file and verify user grants
High memory usage with pg_stat_statements | pg_stat_statements.max set too high | Reduce pg_stat_statements.max: ALTER SYSTEM SET pg_stat_statements.max = 5000;
Missing query text in statistics | Query longer than track_activity_query_size | Increase it: ALTER SYSTEM SET track_activity_query_size = '2048';
Statistics reset unexpectedly | Database restart without pg_stat_statements.save | Enable persistent statistics: ALTER SYSTEM SET pg_stat_statements.save = 'on';
