Configure comprehensive monitoring for the OpenResty web server: the nginx-lua-prometheus module collects metrics, Prometheus scrapes and stores them, and Grafana visualizes them with custom dashboards and alerting rules.
Prerequisites
- Root or sudo access
- OpenResty web server installed (if not, follow our OpenResty installation guide first)
- Basic understanding of Lua scripting
- 4GB+ RAM recommended
What this solves
OpenResty monitoring provides real-time insights into your web server performance, request patterns, and resource utilization. This tutorial shows you how to configure OpenResty with Lua-based Prometheus metrics collection, set up Prometheus to scrape these metrics, and create Grafana dashboards for comprehensive performance analytics and alerting.
Step-by-step configuration
Update system packages
Start by refreshing your package index and upgrading existing packages so the required tools install at current versions.
sudo apt update && sudo apt upgrade -y
sudo apt install -y curl wget git build-essential
Install nginx-lua-prometheus module
Download and install the nginx-lua-prometheus module that enables Prometheus metrics collection from OpenResty Lua scripts.
cd /opt
sudo git clone https://github.com/knyar/nginx-lua-prometheus.git
sudo chown -R root:root nginx-lua-prometheus
sudo chmod -R 755 nginx-lua-prometheus
Configure OpenResty for metrics collection
Create a dedicated configuration file for Prometheus metrics collection and monitoring endpoints.
# Prometheus metrics configuration
lua_shared_dict prometheus_metrics 10M;
lua_package_path "/opt/nginx-lua-prometheus/?.lua;/usr/local/openresty/lualib/?.lua;;";
init_worker_by_lua_block {
prometheus = require("prometheus").init("prometheus_metrics")
-- HTTP request metrics
metric_requests = prometheus:counter(
"nginx_http_requests_total", "Number of HTTP requests", {"host", "status", "method"}
)
metric_latency = prometheus:histogram(
"nginx_http_request_duration_seconds", "HTTP request latency", {"host"}
)
-- Connection metrics
metric_connections = prometheus:gauge(
"nginx_http_connections", "Number of HTTP connections", {"state"}
)
-- Upstream metrics
metric_upstream_latency = prometheus:histogram(
"nginx_upstream_response_time_seconds", "Upstream response time", {"upstream"}
)
}
log_by_lua_block {
local host = ngx.var.host or "unknown"
local status = tostring(ngx.var.status)
local method = ngx.var.request_method
local request_time = tonumber(ngx.var.request_time)
local upstream_response_time = tonumber(ngx.var.upstream_response_time)
-- Record request metrics
metric_requests:inc(1, {host, status, method})
if request_time then
metric_latency:observe(request_time, {host})
end
-- Record upstream metrics
if upstream_response_time then
local upstream_addr = ngx.var.upstream_addr or "unknown"
metric_upstream_latency:observe(upstream_response_time, {upstream_addr})
end
-- Update connection metrics
local connections_active = tonumber(ngx.var.connections_active) or 0
local connections_reading = tonumber(ngx.var.connections_reading) or 0
local connections_writing = tonumber(ngx.var.connections_writing) or 0
local connections_waiting = tonumber(ngx.var.connections_waiting) or 0
metric_connections:set(connections_active, {"active"})
metric_connections:set(connections_reading, {"reading"})
metric_connections:set(connections_writing, {"writing"})
metric_connections:set(connections_waiting, {"waiting"})
}
# Metrics endpoint server
server {
listen 9145;
server_name localhost;
access_log off;
location /metrics {
content_by_lua_block {
prometheus:collect()
}
# Restrict scraping to localhost, matching the install script below
allow 127.0.0.1;
allow ::1;
deny all;
}
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
allow ::1;
deny all;
}
}
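Once this server block is live, /metrics returns plain-text Prometheus exposition format. As a rough illustration of what a scraper sees (the payload below is made up, not captured from a live server), a few lines of Python can parse counter samples and aggregate them by label:

```python
import re

# Illustrative payload in Prometheus text exposition format; a real
# /metrics response from nginx-lua-prometheus looks similar.
SAMPLE = """\
# HELP nginx_http_requests_total Number of HTTP requests
# TYPE nginx_http_requests_total counter
nginx_http_requests_total{host="example.com",status="200",method="GET"} 1042
nginx_http_requests_total{host="example.com",status="404",method="GET"} 7
nginx_http_requests_total{host="example.com",status="500",method="POST"} 3
"""

LINE = re.compile(r'^(\w+)\{(.*)\}\s+([0-9.eE+-]+)$')

def parse_counters(text):
    """Return {(metric_name, label_string): value} for non-comment lines."""
    samples = {}
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue
        m = LINE.match(line)
        if m:
            name, labels, value = m.groups()
            samples[(name, labels)] = float(value)
    return samples

samples = parse_counters(SAMPLE)
total = sum(samples.values())
errors = sum(v for (name, labels), v in samples.items() if 'status="500"' in labels)
print(total, errors)  # 1052.0 3.0
```

This is the same format Prometheus parses on every scrape; a quick `curl localhost:9145/metrics` should show the real counters growing as traffic arrives.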
Update main OpenResty configuration
Include the Prometheus configuration and enable stub_status module for additional metrics.
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
use epoll;
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Include Prometheus metrics configuration
include /etc/openresty/conf.d/prometheus.conf;
# Log format for metrics
log_format metrics '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time"';
# Basic settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# Include additional configurations
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Create example application server block
Configure a sample application with metrics collection enabled to test your monitoring setup.
server {
listen 80;
server_name example.com www.example.com;
root /var/www/html;
index index.html index.htm;
access_log /var/log/nginx/example-app.access.log metrics;
error_log /var/log/nginx/example-app.error.log;
# Enable metrics collection for this server
location / {
try_files $uri $uri/ =404;
# Add custom headers for monitoring
add_header X-Response-Time $request_time always;
add_header X-Upstream-Time $upstream_response_time always;
}
# Sample API endpoint with Lua metrics
location /api/ {
default_type application/json;
content_by_lua_block {
local start_time = ngx.now()
-- Your API logic here
ngx.say('{"status": "ok", "timestamp": ' .. ngx.time() .. '}')
-- Log the handler's elapsed time; ngx.var can only be assigned to
-- variables declared with "set", and headers cannot be changed after
-- the body is sent, so record the timing in the error log instead
local response_time = ngx.now() - start_time
ngx.log(ngx.INFO, "api response time: ", response_time)
}
}
}
Enable the site and test configuration
Enable the example application and verify that OpenResty configuration is valid.
sudo ln -sf /etc/nginx/sites-available/example-app /etc/nginx/sites-enabled/
sudo mkdir -p /var/www/html
echo 'OpenResty Monitoring Test' | sudo tee /var/www/html/index.html
sudo openresty -t
Install and configure Prometheus
Install Prometheus to collect and store metrics from your OpenResty instances.
wget https://github.com/prometheus/prometheus/releases/download/v2.45.0/prometheus-2.45.0.linux-amd64.tar.gz
tar xzf prometheus-2.45.0.linux-amd64.tar.gz
sudo mv prometheus-2.45.0.linux-amd64 /opt/prometheus
sudo useradd --no-create-home --shell /bin/false prometheus
sudo mkdir -p /etc/prometheus /var/lib/prometheus
sudo chown prometheus:prometheus /etc/prometheus /var/lib/prometheus
sudo chown -R prometheus:prometheus /opt/prometheus
Configure Prometheus for OpenResty metrics
Create Prometheus configuration to scrape metrics from your OpenResty metrics endpoint.
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  - "openresty_alerts.yml"

alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'openresty'
    static_configs:
      - targets: ['localhost:9145']
    metrics_path: '/metrics'
    scrape_interval: 5s
    scrape_timeout: 5s

# Note: /nginx_status serves the plain-text stub_status format, which
# Prometheus cannot parse directly. Use an exporter such as
# nginx-prometheus-exporter if you need those numbers in Prometheus.
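With the 5 s scrape interval configured above, ingestion volume scales linearly with the number of time series the endpoint exposes. A back-of-the-envelope estimate (the series count here is hypothetical, for illustration only):

```python
def samples_per_day(series, scrape_interval_s):
    """Samples ingested per day for one scrape job."""
    return series * (86400 // scrape_interval_s)

# Suppose the OpenResty endpoint exposes ~500 series
# (hosts x statuses x methods, plus histogram buckets).
print(samples_per_day(500, 5))   # 8640000
print(samples_per_day(500, 15))  # 2880000
```

Tripling the interval cuts ingestion to a third; 5 s is fine for a single node, but consider 15 s or more when scraping a fleet.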
Create Prometheus systemd service
Set up Prometheus as a systemd service for automatic startup and management.
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/opt/prometheus/prometheus \
--config.file /etc/prometheus/prometheus.yml \
--storage.tsdb.path /var/lib/prometheus/ \
--web.console.templates=/opt/prometheus/consoles \
--web.console.libraries=/opt/prometheus/console_libraries \
--web.listen-address=0.0.0.0:9090 \
--web.enable-lifecycle
Restart=always
[Install]
WantedBy=multi-user.target
Install and configure Grafana
Install Grafana for creating dashboards and visualizing OpenResty metrics.
sudo mkdir -p /etc/apt/keyrings
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt update
sudo apt install -y grafana
Create OpenResty alerting rules
Configure Prometheus alerting rules for monitoring OpenResty performance and availability.
groups:
  - name: openresty_alerts
    rules:
      - alert: OpenRestyDown
        expr: up{job="openresty"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "OpenResty instance is down"
          description: "OpenResty instance {{ $labels.instance }} has been down for more than 1 minute."
      - alert: HighRequestLatency
        expr: histogram_quantile(0.95, sum by (host, le) (rate(nginx_http_request_duration_seconds_bucket[5m]))) > 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High request latency detected"
          description: "95th percentile request latency is {{ $value }}s for {{ $labels.host }}"
      - alert: HighErrorRate
        expr: sum by (host) (rate(nginx_http_requests_total{status=~"5.."}[5m])) / sum by (host) (rate(nginx_http_requests_total[5m])) > 0.1
        for: 3m
        labels:
          severity: critical
        annotations:
          summary: "High error rate detected"
          description: "Error rate is {{ $value | humanizePercentage }} for {{ $labels.host }}"
      - alert: TooManyConnections
        expr: nginx_http_connections{state="active"} > 1000
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "Too many active connections"
          description: "Active connections: {{ $value }} on {{ $labels.instance }}"
      - alert: UpstreamLatencyHigh
        expr: histogram_quantile(0.95, sum by (upstream, le) (rate(nginx_upstream_response_time_seconds_bucket[5m]))) > 2
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High upstream latency"
          description: "95th percentile upstream response time is {{ $value }}s for {{ $labels.upstream }}"
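The HighErrorRate rule divides the 5xx request rate by the total request rate over a 5-minute window. The same arithmetic, sketched in Python over two counter snapshots taken five minutes apart (the counter values are hypothetical):

```python
def rate(counter_then, counter_now, window_s):
    """Per-second increase of a monotonic counter, analogous to PromQL rate()."""
    return (counter_now - counter_then) / window_s

WINDOW = 300  # 5 minutes, matching the [5m] range in the rule

# Counter snapshots: total requests and 5xx requests
total_rate = rate(10_000, 13_000, WINDOW)   # 10 req/s
error_rate = rate(200, 560, WINDOW)         # 1.2 req/s

ratio = error_rate / total_rate
print(round(ratio, 3))   # 0.12
print(ratio > 0.1)       # True -> the alert condition holds
```

Because both rules use rates rather than raw counter values, counter resets on OpenResty restart do not produce false spikes.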
Start and enable services
Start all monitoring services and enable them to start automatically on boot.
sudo systemctl daemon-reload
sudo systemctl enable --now prometheus
sudo systemctl enable --now grafana-server
sudo systemctl restart openresty
Verify services are running
sudo systemctl status prometheus
sudo systemctl status grafana-server
sudo systemctl status openresty
Configure firewall rules
Open the necessary ports for Prometheus, Grafana, and the OpenResty metrics endpoint. In production, restrict ports 9090, 3000, and 9145 to trusted source addresses rather than opening them to the world.
sudo ufw allow 9090/tcp comment "Prometheus"
sudo ufw allow 3000/tcp comment "Grafana"
sudo ufw allow 9145/tcp comment "OpenResty Metrics"
sudo ufw allow 80/tcp comment "HTTP"
sudo ufw reload
Configure Grafana data source and dashboard
Set up Prometheus as a data source in Grafana and create an OpenResty monitoring dashboard.
# Create Grafana provisioning directory
sudo mkdir -p /etc/grafana/provisioning/{datasources,dashboards}
# Create the Prometheus datasource configuration
sudo tee /etc/grafana/provisioning/datasources/prometheus.yml > /dev/null << 'EOF'
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
EOF
Create OpenResty dashboard configuration
Create a comprehensive Grafana dashboard for monitoring OpenResty performance metrics.
{
"dashboard": {
"id": null,
"title": "OpenResty Performance Monitoring",
"tags": ["openresty", "nginx", "performance"],
"timezone": "browser",
"panels": [
{
"id": 1,
"title": "Request Rate",
"type": "graph",
"targets": [
{
"expr": "rate(nginx_http_requests_total[5m])",
"legendFormat": "{{ host }} - {{ method }}"
}
],
"yAxes": [
{
"label": "Requests/sec"
}
],
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 0}
},
{
"id": 2,
"title": "Response Time (95th percentile)",
"type": "graph",
"targets": [
{
"expr": "histogram_quantile(0.95, nginx_http_request_duration_seconds_bucket)",
"legendFormat": "95th percentile - {{ host }}"
}
],
"yAxes": [
{
"label": "Seconds"
}
],
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 0}
},
{
"id": 3,
"title": "HTTP Status Codes",
"type": "graph",
"targets": [
{
"expr": "rate(nginx_http_requests_total[5m])",
"legendFormat": "{{ status }} - {{ host }}"
}
],
"gridPos": {"h": 8, "w": 24, "x": 0, "y": 8}
},
{
"id": 4,
"title": "Active Connections",
"type": "singlestat",
"targets": [
{
"expr": "nginx_http_connections{state=\"active\"}",
"legendFormat": "Active"
}
],
"gridPos": {"h": 4, "w": 6, "x": 0, "y": 16}
},
{
"id": 5,
"title": "Upstream Response Time",
"type": "graph",
"targets": [
{
"expr": "histogram_quantile(0.95, nginx_upstream_response_time_seconds_bucket)",
"legendFormat": "95th percentile - {{ upstream }}"
}
],
"gridPos": {"h": 8, "w": 18, "x": 6, "y": 16}
}
],
"time": {
"from": "now-1h",
"to": "now"
},
"refresh": "5s"
}
}
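Before dropping a dashboard JSON into the provisioning directory, it is worth validating it programmatically, since a malformed file can fail silently at load time. A minimal sanity check (the required-keys list is a pragmatic choice, not Grafana's full schema):

```python
import json

def check_dashboard(raw):
    """Basic structural checks for a provisioned dashboard JSON string."""
    doc = json.loads(raw)  # raises ValueError on malformed JSON
    dash = doc.get("dashboard", doc)  # provisioned files may omit the wrapper
    assert "title" in dash, "dashboard needs a title"
    panels = dash.get("panels", [])
    ids = [p["id"] for p in panels]
    assert len(ids) == len(set(ids)), "panel ids must be unique"
    for p in panels:
        assert "gridPos" in p, "every panel needs a gridPos"
    return len(panels)

raw = '{"dashboard": {"title": "t", "panels": [{"id": 1, "gridPos": {}}]}}'
print(check_dashboard(raw))  # 1
```

Running this against the dashboard above before restarting Grafana catches duplicate panel ids and missing layout blocks early.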
Set up dashboard provisioning
Configure Grafana to automatically load the OpenResty dashboard.
apiVersion: 1
providers:
- name: 'OpenResty Dashboards'
orgId: 1
folder: 'OpenResty'
type: file
disableDeletion: false
updateIntervalSeconds: 10
allowUiUpdates: true
options:
path: /etc/grafana/provisioning/dashboards
Restart services and apply configuration
Restart all services to apply the new configurations and ensure everything is working properly.
sudo systemctl restart grafana-server
sudo systemctl restart prometheus
sudo systemctl reload openresty
# Wait a moment for services to start
sleep 10
# Generate some test traffic
curl -s http://localhost/ > /dev/null
curl -s http://localhost/api/ > /dev/null
Verify your setup
Test that all components are working correctly and collecting metrics.
# Check OpenResty metrics endpoint
curl -s http://localhost:9145/metrics | head -20
# Check nginx status endpoint
curl -s http://localhost:9145/nginx_status
# Verify Prometheus is scraping metrics
curl -s "http://localhost:9090/api/v1/label/__name__/values" | grep nginx
# Check Grafana is accessible
curl -I http://localhost:3000
# View service status
sudo systemctl status openresty prometheus grafana-server
Performance optimization
Optimize your monitoring setup for production environments with these additional configurations.
Configure log sampling for high-traffic sites
Implement log sampling to reduce overhead on high-traffic websites while maintaining monitoring accuracy.
# Add to your server blocks for high-traffic monitoring
set_by_lua_block $sample_request {
-- Sample 10% of requests for detailed logging
if math.random() < 0.1 then
return "1"
else
return "0"
end
}
# Conditional logging based on sampling
access_log /var/log/nginx/sampled.log metrics if=$sample_request;
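The set_by_lua_block above admits each request independently with probability 0.1, so the sampled log sees roughly 10% of traffic. A quick seeded simulation confirms the expected fraction:

```python
import random

def sampled_fraction(n, p, seed=42):
    """Fraction of n simulated requests admitted at sampling probability p."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

frac = sampled_fraction(100_000, 0.10)
print(frac)  # close to 0.10
assert 0.09 < frac < 0.11
```

To reconstruct true request counts from a sampled log, multiply observed counts by 1/p (here, 10x); rare events such as individual 5xx requests may still be missed at low sample rates.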
Configure metric retention and storage
Optimize Prometheus storage and retention for your monitoring requirements.
# Add to global section
global:
scrape_interval: 15s
evaluation_interval: 15s
external_labels:
cluster: 'openresty-production'
region: 'us-east-1'
Storage configuration
Update your systemd service with:
--storage.tsdb.retention.time=15d
--storage.tsdb.retention.size=10GB
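These retention settings can be sanity-checked against expected disk use. Prometheus documentation cites roughly 1-2 bytes per sample after compression; using the high end of that figure (the series count is again hypothetical):

```python
def disk_bytes(series, scrape_interval_s, retention_days, bytes_per_sample=2):
    """Rough TSDB disk estimate: ingestion rate x retention x bytes/sample."""
    samples_per_s = series / scrape_interval_s
    return samples_per_s * retention_days * 86400 * bytes_per_sample

gb = disk_bytes(series=500, scrape_interval_s=5, retention_days=15) / 1e9
print(round(gb, 2))  # 0.26
```

A single OpenResty node sits far below the 10GB size cap above; the cap matters once you scrape many instances or add high-cardinality labels.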
Common issues
| Symptom | Cause | Fix |
|---|---|---|
| Metrics endpoint returns 404 | Prometheus configuration not loaded | Check sudo openresty -t and verify include path |
| Lua module not found error | Incorrect lua_package_path | Verify path to nginx-lua-prometheus module |
| Prometheus cannot scrape metrics | Firewall blocking port 9145 | Open port with sudo ufw allow 9145 |
| Grafana dashboard shows no data | Prometheus datasource not configured | Check datasource URL and Prometheus status |
| High memory usage in shared dict | Too many unique label combinations | Reduce metric cardinality or increase shared dict size |
| OpenResty won't start after config | Lua syntax error in metrics block | Check error log and validate Lua syntax |
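The cardinality issue in the last row is easy to quantify: the request counter defined earlier creates one time series per unique (host, status, method) combination, and each series occupies shared-dict space. A rough worst-case estimate (the deployment numbers are illustrative):

```python
def series_count(hosts, statuses, methods):
    """Worst-case time series for nginx_http_requests_total."""
    return hosts * statuses * methods

# Hypothetical deployment: 20 vhosts, 15 observed status codes, 5 methods
n = series_count(20, 15, 5)
print(n)  # 1500

# Each series stores its full label string as a shared-dict key, so
# adding a label like client IP or URL path would multiply this count
# and exhaust the 10M lua_shared_dict quickly.
```

Keep labels bounded (never use raw URLs, user IDs, or client addresses as label values) and grow lua_shared_dict only as a last resort.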
Next steps
- Set up centralized log aggregation with ELK Stack for comprehensive log analysis
- Implement OpenResty JWT authentication to secure your monitoring endpoints
- Configure OpenResty load balancing with health checks
- Set up Alertmanager for monitoring alerts
- Optimize OpenResty performance with advanced caching strategies
Automated install script
Run this to automate the OpenResty side of the setup (module download, metrics configuration, firewall). Prometheus and Grafana are still installed following the steps above.
#!/usr/bin/env bash
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Default values
METRICS_PORT="${1:-9145}"
OPENRESTY_USER="nginx"
# Usage message
usage() {
echo "Usage: $0 [METRICS_PORT]"
echo " METRICS_PORT: Port for Prometheus metrics endpoint (default: 9145)"
exit 1
}
# Logging functions
log_info() { echo -e "${GREEN}[INFO]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Cleanup function
cleanup() {
log_error "Installation failed. Cleaning up..."
if [[ -d /opt/nginx-lua-prometheus ]]; then
rm -rf /opt/nginx-lua-prometheus
fi
if [[ -f /etc/nginx/conf.d/prometheus-metrics.conf ]]; then
rm -f /etc/nginx/conf.d/prometheus-metrics.conf
fi
}
trap cleanup ERR
# Check root privileges
if [[ $EUID -ne 0 ]]; then
log_error "This script must be run as root"
exit 1
fi
# Validate port argument
if [[ $# -gt 1 ]]; then
usage
fi
if ! [[ "$METRICS_PORT" =~ ^[0-9]+$ ]] || [[ "$METRICS_PORT" -lt 1024 ]] || [[ "$METRICS_PORT" -gt 65535 ]]; then
log_error "Invalid port number. Please use a port between 1024-65535"
exit 1
fi
log_info "[1/8] Detecting distribution and package manager..."
# Auto-detect distribution
if [[ -f /etc/os-release ]]; then
. /etc/os-release
case "$ID" in
ubuntu|debian)
PKG_MGR="apt"
PKG_UPDATE="apt update"
PKG_INSTALL="apt install -y"
OPENRESTY_CONF_DIR="/etc/nginx"
OPENRESTY_USER="www-data"
;;
almalinux|rocky|centos|rhel|ol|fedora)
PKG_MGR="dnf"
PKG_UPDATE="dnf update -y"
PKG_INSTALL="dnf install -y"
OPENRESTY_CONF_DIR="/etc/nginx"
OPENRESTY_USER="nginx"
;;
amzn)
PKG_MGR="yum"
PKG_UPDATE="yum update -y"
PKG_INSTALL="yum install -y"
OPENRESTY_CONF_DIR="/etc/nginx"
OPENRESTY_USER="nginx"
;;
*)
log_error "Unsupported distribution: $ID"
exit 1
;;
esac
else
log_error "Cannot detect distribution. /etc/os-release not found."
exit 1
fi
log_info "Detected distribution: $PRETTY_NAME"
log_info "Using package manager: $PKG_MGR"
# Check if OpenResty is installed
log_info "[2/8] Checking OpenResty installation..."
if ! command -v openresty >/dev/null 2>&1 && ! command -v nginx >/dev/null 2>&1; then
log_error "OpenResty or Nginx not found. Please install OpenResty first."
exit 1
fi
# Update system packages
log_info "[3/8] Updating system packages..."
$PKG_UPDATE
# Install required packages
log_info "[4/8] Installing required packages..."
$PKG_INSTALL curl wget git
if [[ "$PKG_MGR" == "apt" ]]; then
$PKG_INSTALL build-essential
else
$PKG_INSTALL gcc gcc-c++ make
fi
# Download nginx-lua-prometheus module
log_info "[5/8] Installing nginx-lua-prometheus module..."
if [[ -d /opt/nginx-lua-prometheus ]]; then
rm -rf /opt/nginx-lua-prometheus
fi
cd /opt
git clone https://github.com/knyar/nginx-lua-prometheus.git
chown -R root:root nginx-lua-prometheus
chmod -R 755 nginx-lua-prometheus
# Create prometheus metrics configuration
log_info "[6/8] Creating Prometheus metrics configuration..."
mkdir -p "${OPENRESTY_CONF_DIR}/conf.d"
cat > "${OPENRESTY_CONF_DIR}/conf.d/prometheus-metrics.conf" << EOF
# Prometheus metrics configuration
lua_shared_dict prometheus_metrics 10M;
lua_package_path "/opt/nginx-lua-prometheus/?.lua;/usr/local/openresty/lualib/?.lua;;";
init_worker_by_lua_block {
prometheus = require("prometheus").init("prometheus_metrics")
-- HTTP request metrics
metric_requests = prometheus:counter(
"nginx_http_requests_total", "Number of HTTP requests", {"host", "status", "method"}
)
metric_latency = prometheus:histogram(
"nginx_http_request_duration_seconds", "HTTP request latency", {"host"}
)
-- Connection metrics
metric_connections = prometheus:gauge(
"nginx_http_connections", "Number of HTTP connections", {"state"}
)
-- Upstream metrics
metric_upstream_latency = prometheus:histogram(
"nginx_upstream_response_time_seconds", "Upstream response time", {"upstream"}
)
}
log_by_lua_block {
local host = ngx.var.host or "unknown"
local status = tostring(ngx.var.status)
local method = ngx.var.request_method
local request_time = tonumber(ngx.var.request_time)
local upstream_response_time = tonumber(ngx.var.upstream_response_time)
-- Record request metrics
metric_requests:inc(1, {host, status, method})
if request_time then
metric_latency:observe(request_time, {host})
end
-- Record upstream metrics
if upstream_response_time then
local upstream_addr = ngx.var.upstream_addr or "unknown"
metric_upstream_latency:observe(upstream_response_time, {upstream_addr})
end
-- Update connection metrics
local connections_active = tonumber(ngx.var.connections_active) or 0
local connections_reading = tonumber(ngx.var.connections_reading) or 0
local connections_writing = tonumber(ngx.var.connections_writing) or 0
local connections_waiting = tonumber(ngx.var.connections_waiting) or 0
metric_connections:set(connections_active, {"active"})
metric_connections:set(connections_reading, {"reading"})
metric_connections:set(connections_writing, {"writing"})
metric_connections:set(connections_waiting, {"waiting"})
}
# Metrics endpoint server
server {
listen ${METRICS_PORT};
server_name localhost;
access_log off;
location /metrics {
content_by_lua_block {
prometheus:collect()
}
allow 127.0.0.1;
allow ::1;
deny all;
}
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
allow ::1;
deny all;
}
}
EOF
chown root:root "${OPENRESTY_CONF_DIR}/conf.d/prometheus-metrics.conf"
chmod 644 "${OPENRESTY_CONF_DIR}/conf.d/prometheus-metrics.conf"
# Configure firewall
log_info "[7/8] Configuring firewall..."
if command -v firewall-cmd >/dev/null 2>&1 && systemctl is-active firewalld >/dev/null 2>&1; then
firewall-cmd --permanent --add-port="${METRICS_PORT}/tcp" --zone=internal
firewall-cmd --reload
log_info "Firewall configured for port $METRICS_PORT (internal zone)"
elif command -v ufw >/dev/null 2>&1 && ufw status | grep -q "Status: active"; then
ufw allow from 127.0.0.1 to any port "$METRICS_PORT"
ufw allow from ::1 to any port "$METRICS_PORT"
log_info "UFW configured for port $METRICS_PORT (localhost only)"
fi
# Test configuration and restart service
log_info "[8/8] Testing configuration and restarting OpenResty..."
if command -v openresty >/dev/null 2>&1; then
openresty -t
systemctl restart openresty || systemctl restart nginx
systemctl enable openresty || systemctl enable nginx
else
nginx -t
systemctl restart nginx
systemctl enable nginx
fi
# Verification checks
log_info "Verifying installation..."
# Check if service is running
if systemctl is-active openresty >/dev/null 2>&1 || systemctl is-active nginx >/dev/null 2>&1; then
log_info "✓ OpenResty/Nginx is running"
else
log_error "✗ OpenResty/Nginx is not running"
exit 1
fi
# Check if metrics endpoint is accessible
sleep 2
if curl -s "http://localhost:${METRICS_PORT}/metrics" >/dev/null; then
log_info "✓ Metrics endpoint is accessible at http://localhost:${METRICS_PORT}/metrics"
else
log_warn "⚠ Metrics endpoint may not be ready yet. Check manually: curl http://localhost:${METRICS_PORT}/metrics"
fi
# Check if stub_status is accessible
if curl -s "http://localhost:${METRICS_PORT}/nginx_status" >/dev/null; then
log_info "✓ Nginx status endpoint is accessible at http://localhost:${METRICS_PORT}/nginx_status"
else
log_warn "⚠ Nginx status endpoint may not be ready yet"
fi
log_info "OpenResty Prometheus monitoring setup completed successfully!"
log_info ""
log_info "Next steps:"
log_info "1. Configure Prometheus to scrape metrics from http://localhost:${METRICS_PORT}/metrics"
log_info "2. Import Grafana dashboards for OpenResty/Nginx monitoring"
log_info "3. Generate some traffic to see metrics in action"
Review the script before running. It requires root privileges, so execute with: sudo bash install.sh