Set up OpenResty monitoring with Prometheus and Grafana dashboards for performance analytics

Intermediate · 45 min · Apr 04, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Configure comprehensive monitoring for the OpenResty web server: the nginx-lua-prometheus module collects metrics, Prometheus stores them, and Grafana visualizes them with custom dashboards and alerting rules.

Prerequisites

  • Root or sudo access
  • OpenResty web server installed
  • Basic understanding of Lua scripting
  • 4GB+ RAM recommended

What this solves

OpenResty monitoring provides real-time insights into your web server performance, request patterns, and resource utilization. This tutorial shows you how to configure OpenResty with Lua-based Prometheus metrics collection, set up Prometheus to scrape these metrics, and create Grafana dashboards for comprehensive performance analytics and alerting.

If you haven't installed OpenResty yet, follow our OpenResty installation guide first; the steps below assume a working installation.

Step-by-step configuration

Update system packages

Start by updating your package manager to ensure you get the latest versions of required packages.

# Ubuntu / Debian
sudo apt update && sudo apt upgrade -y
sudo apt install -y curl wget git build-essential

# AlmaLinux / Rocky Linux
sudo dnf update -y
sudo dnf install -y curl wget git gcc gcc-c++ make

Install nginx-lua-prometheus module

Download and install the nginx-lua-prometheus module that enables Prometheus metrics collection from OpenResty Lua scripts.

cd /opt
sudo git clone https://github.com/knyar/nginx-lua-prometheus.git
sudo chown -R root:root nginx-lua-prometheus
sudo chmod -R 755 nginx-lua-prometheus

Configure OpenResty for metrics collection

Create a dedicated configuration file for Prometheus metrics collection and monitoring endpoints. This guide saves it as /etc/openresty/conf.d/prometheus.conf, the path included from the main configuration below.

# Prometheus metrics configuration
lua_shared_dict prometheus_metrics 10M;
lua_package_path "/opt/nginx-lua-prometheus/?.lua;/usr/local/openresty/lualib/?.lua;;";

init_worker_by_lua_block {
    prometheus = require("prometheus").init("prometheus_metrics")
    
    -- HTTP request metrics
    metric_requests = prometheus:counter(
        "nginx_http_requests_total", "Number of HTTP requests", {"host", "status", "method"}
    )
    
    metric_latency = prometheus:histogram(
        "nginx_http_request_duration_seconds", "HTTP request latency", {"host"}
    )
    
    -- Connection metrics
    metric_connections = prometheus:gauge(
        "nginx_http_connections", "Number of HTTP connections", {"state"}
    )
    
    -- Upstream metrics
    metric_upstream_latency = prometheus:histogram(
        "nginx_upstream_response_time_seconds", "Upstream response time", {"upstream"}
    )
}

log_by_lua_block {
    local host = ngx.var.host or "unknown"
    local status = tostring(ngx.var.status)
    local method = ngx.var.request_method
    local request_time = tonumber(ngx.var.request_time)
    local upstream_response_time = tonumber(ngx.var.upstream_response_time)
    
    -- Record request metrics
    metric_requests:inc(1, {host, status, method})
    
    if request_time then
        metric_latency:observe(request_time, {host})
    end
    
    -- Record upstream metrics
    if upstream_response_time then
        local upstream_addr = ngx.var.upstream_addr or "unknown"
        metric_upstream_latency:observe(upstream_response_time, {upstream_addr})
    end
    
    -- Update connection metrics
    -- These nginx variables are strings; convert before passing to the gauge
    local connections_active = tonumber(ngx.var.connections_active) or 0
    local connections_reading = tonumber(ngx.var.connections_reading) or 0
    local connections_writing = tonumber(ngx.var.connections_writing) or 0
    local connections_waiting = tonumber(ngx.var.connections_waiting) or 0
    
    metric_connections:set(connections_active, {"active"})
    metric_connections:set(connections_reading, {"reading"})
    metric_connections:set(connections_writing, {"writing"})
    metric_connections:set(connections_waiting, {"waiting"})
}
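Once requests flow through the server, the metrics endpoint defined next serves the Prometheus text exposition format. The output will look roughly like this (label and sample values are illustrative):

```text
# HELP nginx_http_requests_total Number of HTTP requests
# TYPE nginx_http_requests_total counter
nginx_http_requests_total{host="example.com",status="200",method="GET"} 42
# HELP nginx_http_request_duration_seconds HTTP request latency
# TYPE nginx_http_request_duration_seconds histogram
nginx_http_request_duration_seconds_bucket{host="example.com",le="0.1"} 40
nginx_http_request_duration_seconds_sum{host="example.com"} 1.8
nginx_http_request_duration_seconds_count{host="example.com"} 42
```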

Metrics endpoint

server {
    listen 9145;
    server_name localhost;
    access_log off;

    location /metrics {
        content_by_lua_block {
            prometheus:collect()
        }
    }

    location /nginx_status {
        stub_status on;
        access_log off;
    }
}
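The metrics port has no authentication, so anything that can reach port 9145 can read your traffic statistics. If Prometheus runs on another host, consider restricting access at the nginx level; a sketch (10.0.0.5 is a placeholder for your Prometheus server's address):

```nginx
    # Inside the metrics server block: allow only localhost and the scraper
    location /metrics {
        allow 127.0.0.1;
        allow 10.0.0.5;
        deny all;

        content_by_lua_block {
            prometheus:collect()
        }
    }
```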

Update main OpenResty configuration

Include the Prometheus configuration and enable the stub_status module for additional metrics. Note that the main configuration file location depends on how OpenResty was installed; package installs typically use /usr/local/openresty/nginx/conf/nginx.conf, so adjust the include paths below to match your layout.

worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    
    # Include Prometheus metrics configuration
    include /etc/openresty/conf.d/prometheus.conf;
    
    # Log format for metrics
    log_format metrics '$remote_addr - $remote_user [$time_local] '
                      '"$request" $status $body_bytes_sent '
                      '"$http_referer" "$http_user_agent" '
                      'rt=$request_time uct="$upstream_connect_time" '
                      'uht="$upstream_header_time" urt="$upstream_response_time"';
    
    # Basic settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    
    # Include additional configurations
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Create example application server block

Configure a sample application with metrics collection enabled to test your monitoring setup. Save it as /etc/nginx/sites-available/example-app so the symlink in the next step matches.

server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/html;
    index index.html index.htm;
    
    access_log /var/log/nginx/example-app.access.log metrics;
    error_log /var/log/nginx/example-app.error.log;
    
    # Enable metrics collection for this server
    location / {
        try_files $uri $uri/ =404;
        
        # Add custom headers for monitoring
        add_header X-Response-Time $request_time always;
        add_header X-Upstream-Time $upstream_response_time always;
    }
    
    # Sample API endpoint with Lua metrics
    location /api/ {
        # Declare the variable before Lua assigns to it; writing to an
        # undeclared nginx variable raises a runtime error
        set $api_response_time "";

        content_by_lua_block {
            local start_time = ngx.now()

            -- Set the content type before the first body output
            ngx.header["Content-Type"] = "application/json"

            -- Your API logic here
            ngx.say('{"status": "ok", "timestamp": ' .. ngx.time() .. '}')

            -- Custom metric for API response time
            local response_time = ngx.now() - start_time
            ngx.var.api_response_time = response_time
        }
    }
}
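The built-in request metrics can also be extended with application-specific counters. A minimal sketch, assuming you add the metric definition to the init_worker_by_lua_block shown earlier (metric_api_calls and app_api_calls_total are hypothetical names, not part of the module):

```nginx
# In init_worker_by_lua_block, define once per worker:
#   metric_api_calls = prometheus:counter(
#       "app_api_calls_total", "API calls by endpoint", {"endpoint"}
#   )

location /api/orders/ {
    content_by_lua_block {
        -- Count each call against the hypothetical custom counter
        metric_api_calls:inc(1, {"orders"})
        ngx.header["Content-Type"] = "application/json"
        ngx.say('{"status": "ok"}')
    }
}
```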

Enable the site and test configuration

Enable the example application and verify that OpenResty configuration is valid.

sudo ln -sf /etc/nginx/sites-available/example-app /etc/nginx/sites-enabled/
sudo mkdir -p /var/www/html
echo 'OpenResty Monitoring Test' | sudo tee /var/www/html/index.html
sudo openresty -t

Install and configure Prometheus

Install Prometheus to collect and store metrics from your OpenResty instances.

wget https://github.com/prometheus/prometheus/releases/download/v2.45.0/prometheus-2.45.0.linux-amd64.tar.gz
tar xzf prometheus-2.45.0.linux-amd64.tar.gz
sudo mv prometheus-2.45.0.linux-amd64 /opt/prometheus
sudo useradd --no-create-home --shell /bin/false prometheus
sudo mkdir -p /etc/prometheus /var/lib/prometheus
sudo chown prometheus:prometheus /etc/prometheus /var/lib/prometheus
sudo chown -R prometheus:prometheus /opt/prometheus

Configure Prometheus for OpenResty metrics

Create /etc/prometheus/prometheus.yml to scrape metrics from your OpenResty metrics endpoint.

global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  - "openresty_alerts.yml"

alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'openresty'
    static_configs:
      - targets: ['localhost:9145']
    metrics_path: '/metrics'
    scrape_interval: 5s
    scrape_timeout: 5s
    
  # Note: /nginx_status (stub_status) returns plain text, not the Prometheus
  # exposition format, so Prometheus cannot scrape it directly. If you need
  # those numbers in Prometheus, run nginx-prometheus-exporter; the Lua module
  # above already exports connection gauges in any case.
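Once Prometheus starts scraping, you can sanity-check the data in the expression browser at http://localhost:9090. A few queries worth trying (metric names match the Lua definitions above):

```promql
# Per-host request rate over the last 5 minutes
sum by (host) (rate(nginx_http_requests_total[5m]))

# Share of 5xx responses
sum(rate(nginx_http_requests_total{status=~"5.."}[5m]))
  / sum(rate(nginx_http_requests_total[5m]))

# 95th percentile request latency per host
histogram_quantile(0.95,
  sum by (host, le) (rate(nginx_http_request_duration_seconds_bucket[5m])))
```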

Create Prometheus systemd service

Set up Prometheus as a systemd service for automatic startup and management. Create /etc/systemd/system/prometheus.service with the following content.

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/opt/prometheus/prometheus \
  --config.file /etc/prometheus/prometheus.yml \
  --storage.tsdb.path /var/lib/prometheus/ \
  --web.console.templates=/opt/prometheus/consoles \
  --web.console.libraries=/opt/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-lifecycle
Restart=always

[Install]
WantedBy=multi-user.target

Install and configure Grafana

Install Grafana for creating dashboards and visualizing OpenResty metrics.

sudo mkdir -p /etc/apt/keyrings
wget -q -O - https://apt.grafana.com/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/grafana.gpg
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt update
sudo apt install -y grafana

Create OpenResty alerting rules

Configure Prometheus alerting rules for monitoring OpenResty performance and availability. Save them as /etc/prometheus/openresty_alerts.yml; rule_files paths are resolved relative to prometheus.yml.

groups:
  - name: openresty_alerts
    rules:
      - alert: OpenRestyDown
        expr: up{job="openresty"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "OpenResty instance is down"
          description: "OpenResty instance {{ $labels.instance }} has been down for more than 1 minute."

      - alert: HighRequestLatency
        expr: histogram_quantile(0.95, rate(nginx_http_request_duration_seconds_bucket[5m])) > 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High request latency detected"
          description: "95th percentile request latency is {{ $value }}s for {{ $labels.host }}"

      - alert: HighErrorRate
        expr: rate(nginx_http_requests_total{status=~"5.."}[5m]) / rate(nginx_http_requests_total[5m]) > 0.1
        for: 3m
        labels:
          severity: critical
        annotations:
          summary: "High error rate detected"
          description: "Error rate is {{ $value | humanizePercentage }} for {{ $labels.host }}"

      - alert: TooManyConnections
        expr: nginx_http_connections{state="active"} > 1000
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "Too many active connections"
          description: "Active connections: {{ $value }} on {{ $labels.instance }}"

      - alert: UpstreamLatencyHigh
        expr: histogram_quantile(0.95, rate(nginx_upstream_response_time_seconds_bucket[5m])) > 2
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High upstream latency"
          description: "95th percentile upstream response time is {{ $value }}s for {{ $labels.upstream }}"
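As a quick sanity check of the HighErrorRate threshold arithmetic (purely illustrative numbers, not live metrics): with 3 errors/s out of 20 requests/s the ratio exceeds the 0.1 trigger.

```shell
# Toy calculation mirroring the HighErrorRate expression:
# rate(5xx) / rate(total), compared against the 0.1 threshold
err_rate=3
total_rate=20
ratio=$(awk -v e="$err_rate" -v t="$total_rate" 'BEGIN { printf "%.2f", e / t }')
echo "$ratio"   # 0.15, so the alert would fire after the 3m hold period
```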

Start and enable services

Start all monitoring services and enable them to start automatically on boot.

sudo systemctl daemon-reload
sudo systemctl enable --now prometheus
sudo systemctl enable --now grafana-server
sudo systemctl restart openresty

Verify services are running

sudo systemctl status prometheus
sudo systemctl status grafana-server
sudo systemctl status openresty

Configure firewall rules

Open necessary ports for Prometheus, Grafana, and OpenResty metrics endpoints.

# Ubuntu / Debian (ufw)
sudo ufw allow 9090/tcp comment "Prometheus"
sudo ufw allow 3000/tcp comment "Grafana"
sudo ufw allow 9145/tcp comment "OpenResty Metrics"
sudo ufw allow 80/tcp comment "HTTP"
sudo ufw reload

# AlmaLinux / Rocky Linux (firewalld; --add-port has no comment option)
sudo firewall-cmd --permanent --add-port=9090/tcp
sudo firewall-cmd --permanent --add-port=3000/tcp
sudo firewall-cmd --permanent --add-port=9145/tcp
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload

Configure Grafana data source and dashboard

Set up Prometheus as a data source in Grafana and create an OpenResty monitoring dashboard.

# Create Grafana provisioning directory
sudo mkdir -p /etc/grafana/provisioning/{datasources,dashboards}

Create Prometheus datasource configuration

sudo tee /etc/grafana/provisioning/datasources/prometheus.yml > /dev/null <<'EOF'
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
    editable: true
EOF

Create OpenResty dashboard configuration

Create a Grafana dashboard for monitoring OpenResty performance metrics and save it as /etc/grafana/provisioning/dashboards/openresty.json (the provisioning path configured in the next step). File-based provisioning expects the dashboard object at the top level, without the "dashboard" wrapper used by the HTTP API. Note also that graph and singlestat are legacy panel types; on recent Grafana releases you may need to migrate them to timeseries and stat.

{
  "id": null,
  "title": "OpenResty Performance Monitoring",
  "tags": ["openresty", "nginx", "performance"],
  "timezone": "browser",
  "panels": [
    {
      "id": 1,
      "title": "Request Rate",
      "type": "graph",
      "targets": [
        {
          "expr": "rate(nginx_http_requests_total[5m])",
          "legendFormat": "{{ host }} - {{ method }}"
        }
      ],
      "yAxes": [
        {
          "label": "Requests/sec"
        }
      ],
      "gridPos": {"h": 8, "w": 12, "x": 0, "y": 0}
    },
    {
      "id": 2,
      "title": "Response Time (95th percentile)",
      "type": "graph",
      "targets": [
        {
          "expr": "histogram_quantile(0.95, rate(nginx_http_request_duration_seconds_bucket[5m]))",
          "legendFormat": "95th percentile - {{ host }}"
        }
      ],
      "yAxes": [
        {
          "label": "Seconds"
        }
      ],
      "gridPos": {"h": 8, "w": 12, "x": 12, "y": 0}
    },
    {
      "id": 3,
      "title": "HTTP Status Codes",
      "type": "graph",
      "targets": [
        {
          "expr": "sum by (status, host) (rate(nginx_http_requests_total[5m]))",
          "legendFormat": "{{ status }} - {{ host }}"
        }
      ],
      "gridPos": {"h": 8, "w": 24, "x": 0, "y": 8}
    },
    {
      "id": 4,
      "title": "Active Connections",
      "type": "singlestat",
      "targets": [
        {
          "expr": "nginx_http_connections{state=\"active\"}",
          "legendFormat": "Active"
        }
      ],
      "gridPos": {"h": 4, "w": 6, "x": 0, "y": 16}
    },
    {
      "id": 5,
      "title": "Upstream Response Time",
      "type": "graph",
      "targets": [
        {
          "expr": "histogram_quantile(0.95, rate(nginx_upstream_response_time_seconds_bucket[5m]))",
          "legendFormat": "95th percentile - {{ upstream }}"
        }
      ],
      "gridPos": {"h": 8, "w": 18, "x": 6, "y": 16}
    }
  ],
  "time": {
    "from": "now-1h",
    "to": "now"
  },
  "refresh": "5s"
}

Set up dashboard provisioning

Configure Grafana to automatically load the OpenResty dashboard. Save this provider definition as /etc/grafana/provisioning/dashboards/dashboards.yml.

apiVersion: 1

providers:
  - name: 'OpenResty Dashboards'
    orgId: 1
    folder: 'OpenResty'
    type: file
    disableDeletion: false
    updateIntervalSeconds: 10
    allowUiUpdates: true
    options:
      path: /etc/grafana/provisioning/dashboards

Restart services and apply configuration

Restart all services to apply the new configurations and ensure everything is working properly.

sudo systemctl restart grafana-server
sudo systemctl restart prometheus
sudo systemctl reload openresty

Wait a moment for services to start

sleep 10

Generate some test traffic

curl -s http://localhost/ > /dev/null
curl -s http://localhost/api/ > /dev/null

Verify your setup

Test that all components are working correctly and collecting metrics.

# Check OpenResty metrics endpoint
curl -s http://localhost:9145/metrics | head -20

Check nginx status endpoint

curl -s http://localhost:9145/nginx_status

Verify Prometheus is scraping metrics

curl -s "http://localhost:9090/api/v1/label/__name__/values" | grep nginx

Check Grafana is accessible

curl -I http://localhost:3000

View service status

sudo systemctl status openresty prometheus grafana-server
Note: Default Grafana login credentials are admin/admin. You'll be prompted to change the password on first login at http://localhost:3000.

Performance optimization

Optimize your monitoring setup for production environments with these additional configurations.

Configure log sampling for high-traffic sites

Implement log sampling to reduce overhead on high-traffic websites while maintaining monitoring accuracy.

# Add to your server blocks for high-traffic monitoring
set_by_lua_block $sample_request {
    -- Sample 10% of requests for detailed logging
    if math.random() < 0.1 then
        return "1"
    else
        return "0"
    end
}

Conditional logging based on sampling; nginx writes the log entry only when the if= variable is non-empty and not "0":

access_log /var/log/nginx/sampled.log metrics if=$sample_request;

Configure metric retention and storage

Optimize Prometheus storage and retention for your monitoring requirements.

# Add to global section
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    cluster: 'openresty-production'
    region: 'us-east-1'

Storage configuration

Update your systemd service with:

--storage.tsdb.retention.time=15d

--storage.tsdb.retention.size=10GB
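With 5s scrape intervals, dashboard queries over long time ranges can get expensive. Prometheus recording rules precompute such aggregations; a sketch you could append to openresty_alerts.yml or a separate rule file (the rule names follow the conventional level:metric:operations pattern and are only suggestions):

```yaml
groups:
  - name: openresty_recording
    rules:
      # Precomputed per-host request rate
      - record: host:nginx_http_requests:rate5m
        expr: sum by (host) (rate(nginx_http_requests_total[5m]))

      # Precomputed 95th percentile latency per host
      - record: host:nginx_http_request_duration_seconds:p95_5m
        expr: >
          histogram_quantile(0.95,
            sum by (host, le) (rate(nginx_http_request_duration_seconds_bucket[5m])))
```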

Common issues

Symptom                            | Cause                                | Fix
Metrics endpoint returns 404       | Prometheus configuration not loaded  | Check sudo openresty -t and verify the include path
Lua module not found error         | Incorrect lua_package_path           | Verify the path to the nginx-lua-prometheus module
Prometheus cannot scrape metrics   | Firewall blocking port 9145          | Open the port with sudo ufw allow 9145/tcp
Grafana dashboard shows no data    | Prometheus datasource not configured | Check the datasource URL and Prometheus status
High memory usage in shared dict   | Too many unique label combinations   | Reduce metric cardinality or increase the shared dict size
OpenResty won't start after config | Lua syntax error in metrics block    | Check the error log and validate the Lua syntax
