Configure Grafana dashboards for TimescaleDB analytics with real-time metrics and alerting

Intermediate 45 min Apr 22, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Build comprehensive Grafana dashboards for TimescaleDB analytics with time-series visualizations, custom queries, and automated alerting for performance monitoring and data insights.

Prerequisites

  • TimescaleDB 2.0+ installed and running
  • Grafana 9.0+ with admin access
  • PostgreSQL client tools
  • Sample time-series data (optional)

What this solves

TimescaleDB generates massive amounts of time-series data, but raw metrics are useless without proper visualization and alerting. This tutorial shows you how to create production-grade Grafana dashboards that turn TimescaleDB data into actionable insights, with real-time charts, performance alerts, and automated notifications when your metrics hit critical thresholds.

If you haven't set up TimescaleDB or Grafana yet, check out our TimescaleDB installation guide and Grafana setup tutorial before continuing. You'll also need some time-series data in your database to visualize; if you have none, the sample table created below will do.

Step-by-step configuration

Verify the PostgreSQL data source plugin

The PostgreSQL data source ships with Grafana as a core plugin, so there is normally nothing to install with grafana-cli. Just confirm the Grafana service is running before continuing:

sudo systemctl status grafana-server --no-pager

Create TimescaleDB data source in Grafana

Open Grafana in your browser and navigate to Configuration > Data Sources. Click Add data source and select PostgreSQL.

Host: localhost:5432
Database: timescaledb_metrics
User: grafana_user
Password: your_secure_password
SSL Mode: require (use disable if your local PostgreSQL has no SSL configured)
TimescaleDB: enabled

Enable the TimescaleDB option in the data source settings to unlock time-series specific query optimizations.
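If you manage Grafana as code, the same data source can be declared in a provisioning file instead of the UI. This is a sketch: exact field names vary slightly between Grafana versions, and the path shown is the package default.

```yaml
# /etc/grafana/provisioning/datasources/timescaledb.yaml
apiVersion: 1
datasources:
  - name: TimescaleDB
    type: postgres
    url: localhost:5432
    database: timescaledb_metrics
    user: grafana_user
    secureJsonData:
      password: your_secure_password
    jsonData:
      sslmode: require
      timescaledb: true
```

Restart grafana-server after adding the file so the provisioned source is picked up.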

Create a dedicated Grafana database user

Create a read-only user for Grafana to access your TimescaleDB data securely.

sudo -u postgres psql -d timescaledb_metrics
CREATE USER grafana_user WITH PASSWORD 'secure_grafana_password';
GRANT CONNECT ON DATABASE timescaledb_metrics TO grafana_user;
GRANT USAGE ON SCHEMA public TO grafana_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO grafana_user;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO grafana_user;
\q
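The grants above only cover tables that already exist. So that hypertables created later are readable without re-running the grants, you can also set default privileges (run this as the role that will create the tables, typically postgres):

```sql
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO grafana_user;
```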

Create sample metrics table

If you don't have existing time-series data, create a sample table to demonstrate dashboard functionality.

sudo -u postgres psql -d timescaledb_metrics
CREATE TABLE server_metrics (
  time TIMESTAMPTZ NOT NULL,
  hostname TEXT NOT NULL,
  cpu_usage DOUBLE PRECISION,
  memory_usage DOUBLE PRECISION,
  disk_usage DOUBLE PRECISION,
  network_in BIGINT,
  network_out BIGINT
);

SELECT create_hypertable('server_metrics', 'time');

-- Insert sample data for last 24 hours
INSERT INTO server_metrics 
SELECT 
  generate_series(NOW() - INTERVAL '24 hours', NOW(), INTERVAL '1 minute') AS time,
  'web-server-01' AS hostname,
  random() * 100 AS cpu_usage,
  random() * 100 AS memory_usage,
  random() * 100 AS disk_usage,
  (random() * 1000000)::BIGINT AS network_in,
  (random() * 1000000)::BIGINT AS network_out;

\q
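The insert above covers a single host. To make the hostname dashboard variable configured later worth having, you can generate the same shape of data for a couple of additional hypothetical hosts:

```sql
-- Sample data for two more (made-up) hostnames, one row per minute
INSERT INTO server_metrics
SELECT
  t AS time,
  h AS hostname,
  random() * 100,
  random() * 100,
  random() * 100,
  (random() * 1000000)::BIGINT,
  (random() * 1000000)::BIGINT
FROM generate_series(NOW() - INTERVAL '24 hours', NOW(), INTERVAL '1 minute') AS t
CROSS JOIN unnest(ARRAY['web-server-02', 'db-server-01']) AS h;
```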

Build your first TimescaleDB dashboard

Create a new dashboard in Grafana. Click the + icon, then Dashboard. Add your first panel with a time-series query.

SELECT 
  time AS "time",
  cpu_usage
FROM server_metrics 
WHERE hostname = 'web-server-01'
  AND time >= $__timeFrom()
  AND time <= $__timeTo()
ORDER BY time;

Set the panel type to Time series and configure the Y-axis to show percentage values for CPU usage.
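Grafana's PostgreSQL data source also provides a $__timeFilter() macro that expands to the same pair of time predicates, so the query can be written more compactly:

```sql
SELECT
  time AS "time",
  cpu_usage
FROM server_metrics
WHERE hostname = 'web-server-01'
  AND $__timeFilter(time)
ORDER BY time;
```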

Add memory usage visualization

Create a second panel for memory metrics with the same time range filtering.

SELECT 
  time AS "time",
  memory_usage as "Memory %"
FROM server_metrics 
WHERE hostname = 'web-server-01'
  AND time >= $__timeFrom()
  AND time <= $__timeTo()
ORDER BY time;

Create aggregated performance panel

Use TimescaleDB's time_bucket function to create aggregated views for better performance on large datasets.

SELECT 
  time_bucket('5 minutes', time) AS "time",
  hostname,
  avg(cpu_usage) as "Avg CPU",
  max(cpu_usage) as "Max CPU",
  avg(memory_usage) as "Avg Memory"
FROM server_metrics 
WHERE time >= $__timeFrom()
  AND time <= $__timeTo()
GROUP BY time_bucket('5 minutes', time), hostname
ORDER BY time;

This query groups data into 5-minute intervals and shows average and maximum values, reducing data points for better dashboard performance.
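If your data has gaps (an agent stops reporting, say), time_bucket() simply omits the empty intervals and Grafana draws a straight line across the hole. TimescaleDB's time_bucket_gapfill() emits every bucket in the range instead; it requires explicit time bounds in the WHERE clause, which the Grafana macros already supply:

```sql
SELECT
  time_bucket_gapfill('5 minutes', time) AS "time",
  hostname,
  avg(cpu_usage) AS "Avg CPU"
FROM server_metrics
WHERE time >= $__timeFrom()
  AND time <= $__timeTo()
GROUP BY time_bucket_gapfill('5 minutes', time), hostname
ORDER BY 1;
```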

Add network traffic visualization

Create a panel that shows network I/O converted to megabytes. Note that the sample data stores per-sample byte counts, so this is volume per sample, not a per-second rate. The divisor uses 1024.0 to avoid integer division truncating the BIGINT columns.

SELECT 
  time AS "time",
  network_in / 1024.0 / 1024.0 as "Inbound MB",
  network_out / 1024.0 / 1024.0 as "Outbound MB"
FROM server_metrics 
WHERE hostname = 'web-server-01'
  AND time >= $__timeFrom()
  AND time <= $__timeTo()
ORDER BY time;
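If your collectors instead export network_in and network_out as ever-increasing counters, a true per-second rate can be derived with a window function. This is a sketch: it doesn't handle counter resets, which you'd want to guard against in production.

```sql
SELECT
  time AS "time",
  -- delta of the counter divided by elapsed seconds between samples
  (network_in - lag(network_in) OVER w)
    / NULLIF(EXTRACT(EPOCH FROM (time - lag(time) OVER w)), 0) AS "Inbound B/s"
FROM server_metrics
WHERE hostname = 'web-server-01'
  AND time >= $__timeFrom()
  AND time <= $__timeTo()
WINDOW w AS (ORDER BY time)
ORDER BY time;
```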

Configure dashboard variables

Add dashboard variables to make your dashboard dynamic. Go to Dashboard Settings > Variables and create a hostname variable.

SELECT DISTINCT hostname FROM server_metrics ORDER BY hostname;

Now update your panel queries to use $hostname instead of hardcoded values:

SELECT 
  time AS "time",
  cpu_usage
FROM server_metrics 
WHERE hostname = '$hostname'
  AND time >= $__timeFrom()
  AND time <= $__timeTo()
ORDER BY time;
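If you enable Multi-value on the variable, the equality comparison breaks. Grafana's ${hostname:sqlstring} format option renders the selected values as a quoted, comma-separated list suitable for an IN clause:

```sql
SELECT
  time AS "time",
  hostname,
  cpu_usage
FROM server_metrics
WHERE hostname IN (${hostname:sqlstring})
  AND time >= $__timeFrom()
  AND time <= $__timeTo()
ORDER BY time;
```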

Set up alert rules

Navigate to Alerting > Alert Rules and create a new rule for high CPU usage. Configure the query and conditions.

SELECT 
  time_bucket('1 minute', time) AS time,
  avg(cpu_usage) as cpu_avg
FROM server_metrics 
WHERE time >= now() - INTERVAL '5 minutes'
GROUP BY time_bucket('1 minute', time)
ORDER BY time;

Set the alert condition to trigger when CPU average is above 80% for more than 2 minutes.

Configure notification channels

Set up email notifications for your alerts. Go to Alerting > Contact points and add an email contact.

Name: team-alerts
Type: Email
Addresses: ops-team@example.com
Subject: [ALERT] {{ range .Alerts }}{{ .Annotations.summary }}{{ end }}
Message: Server {{ .CommonLabels.hostname }} has high resource usage

Create notification policies

Configure when and how alerts get sent. Go to Alerting > Notification policies and set up routing rules.

Note: Group alerts by hostname to avoid spam when multiple metrics trigger simultaneously.

Add dashboard annotations

Create annotations to mark important events on your time-series charts; this helps correlate performance changes with deployments or incidents. The query below assumes you record deployments in a deployment_log table with deployment_time and version columns; adapt it to whatever event log you actually keep.

SELECT 
  deployment_time as time,
  'Deployment' as title,
  version as text,
  'deployment' as tags
FROM deployment_log 
WHERE deployment_time >= $__timeFrom()
  AND deployment_time <= $__timeTo()
ORDER BY deployment_time;

Verify your setup

Test your dashboard configuration and alert rules to ensure everything works correctly.

# Check Grafana is running and accessible
curl -f http://localhost:3000/api/health

Verify TimescaleDB connection

sudo -u postgres psql -d timescaledb_metrics -c "SELECT count(*) FROM server_metrics;"

Test alert rule by inserting sustained high CPU usage

A single high reading barely moves a 5-minute average, so insert several minutes of elevated samples:

sudo -u postgres psql -d timescaledb_metrics -c "INSERT INTO server_metrics SELECT generate_series(NOW() - INTERVAL '4 minutes', NOW(), INTERVAL '30 seconds'), 'web-server-01', 95.0, 45.0, 67.0, 5000000, 3000000;"

Open your Grafana dashboard and verify that all panels display data correctly. Check that your variables work and alerts trigger when conditions are met.

Performance optimization

Enable continuous aggregates

For large datasets, create continuous aggregates to pre-compute common queries and improve dashboard performance.

sudo -u postgres psql -d timescaledb_metrics
CREATE MATERIALIZED VIEW server_metrics_hourly
WITH (timescaledb.continuous) AS
SELECT 
  time_bucket('1 hour', time) AS hour,
  hostname,
  avg(cpu_usage) as avg_cpu,
  max(cpu_usage) as max_cpu,
  avg(memory_usage) as avg_memory,
  max(memory_usage) as max_memory
FROM server_metrics
GROUP BY hour, hostname;

SELECT add_continuous_aggregate_policy('server_metrics_hourly',
  start_offset => INTERVAL '2 hours',
  end_offset => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');

\q
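Once hourly rollups exist you may not need raw rows forever. TimescaleDB's retention policy drops old chunks automatically; pick a window that suits you and that is comfortably longer than the aggregate's refresh lag, since the 90 days here is only an example:

```sql
SELECT add_retention_policy('server_metrics', INTERVAL '90 days');
```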

Update dashboard queries for better performance

Modify your dashboard queries to use continuous aggregates for longer time ranges.

SELECT 
  hour AS "time",
  avg_cpu as "Average CPU %",
  max_cpu as "Peak CPU %"
FROM server_metrics_hourly
WHERE hostname = '$hostname'
  AND hour >= $__timeFrom()
  AND hour <= $__timeTo()
ORDER BY hour;

Common issues

  • "No data" in panels. Cause: incorrect time range or missing data. Fix: check the $__timeFrom() and $__timeTo() macros in your queries.
  • Slow dashboard loading. Cause: too many data points. Fix: aggregate with time_bucket() or create continuous aggregates.
  • Alert not triggering. Cause: query returns no results. Fix: run the alert query directly in TimescaleDB to verify data exists.
  • Connection timeout. Cause: database connection pool exhausted. Fix: increase max_connections in postgresql.conf or reduce concurrent queries.
  • Variable not populating. Cause: query syntax error. Fix: test the variable query in Grafana's query editor.

Advanced dashboard features

Add statistical analysis panels

Create panels that show percentiles and statistical distributions of your metrics.

SELECT 
  time_bucket('10 minutes', time) AS "time",
  percentile_cont(0.50) WITHIN GROUP (ORDER BY cpu_usage) as "P50 CPU",
  percentile_cont(0.95) WITHIN GROUP (ORDER BY cpu_usage) as "P95 CPU",
  percentile_cont(0.99) WITHIN GROUP (ORDER BY cpu_usage) as "P99 CPU"
FROM server_metrics
WHERE hostname = '$hostname'
  AND time >= $__timeFrom()
  AND time <= $__timeTo()
GROUP BY time_bucket('10 minutes', time)
ORDER BY time;

Create heat map visualizations

Use heat maps to show metric distribution over time with TimescaleDB's histogram functions.

SELECT 
  time_bucket('5 minutes', time) AS "time",
  width_bucket(cpu_usage, 0, 100, 20) * 5 as "le",
  count(*) as "count"
FROM server_metrics
WHERE time >= $__timeFrom()
  AND time <= $__timeTo()
GROUP BY time_bucket('5 minutes', time), width_bucket(cpu_usage, 0, 100, 20)
ORDER BY time, le;

Pro tip: Use TimescaleDB's built-in time-series functions like first() and last(), or the Toolkit's time_weight(), for more accurate analytics in your Grafana panels.
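As an example of those hyperfunctions, first() and last() select the value ordered by a second column, which is handy for open/close-style panels:

```sql
SELECT
  time_bucket('1 hour', time) AS "time",
  first(cpu_usage, time) AS "Opening CPU",
  last(cpu_usage, time) AS "Closing CPU"
FROM server_metrics
WHERE hostname = 'web-server-01'
  AND time >= $__timeFrom()
  AND time <= $__timeTo()
GROUP BY 1
ORDER BY 1;
```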

For more advanced TimescaleDB setups, check out our guides on TimescaleDB high availability and TimescaleDB with Telegraf integration.
