Configure NGINX reverse proxy with load balancing and SSL termination

Intermediate · 45 min · Apr 19, 2026
Ubuntu 24.04 · Debian 12 · AlmaLinux 9 · Rocky Linux 9

Set up NGINX as a reverse proxy with multiple backend servers, SSL termination, and health monitoring. Perfect for distributing traffic across application instances while handling encryption at the edge.

Prerequisites

  • Root access to the server
  • Domain name with DNS pointing to server
  • Multiple backend application servers

What this solves

NGINX reverse proxy with load balancing distributes incoming requests across multiple backend servers while terminating SSL connections at the proxy layer. This setup improves performance, provides redundancy, and simplifies certificate management by handling encryption in one place instead of on each backend server.

Step-by-step configuration

Update system packages

Start by updating your package index so you install current versions of all packages. Use the command matching your distribution.

# Debian/Ubuntu
sudo apt update && sudo apt upgrade -y

# AlmaLinux/Rocky Linux
sudo dnf upgrade -y

Install NGINX

Install the NGINX web server, which will act as our reverse proxy and load balancer.

# Debian/Ubuntu
sudo apt install -y nginx

# AlmaLinux/Rocky Linux
sudo dnf install -y nginx

Install SSL certificate tools

Install Certbot for Let's Encrypt certificates and OpenSSL for certificate management.

# Debian/Ubuntu
sudo apt install -y certbot python3-certbot-nginx openssl

# AlmaLinux/Rocky Linux (certbot lives in EPEL: sudo dnf install -y epel-release first)
sudo dnf install -y certbot python3-certbot-nginx openssl

Create upstream server configuration

Define your backend servers in an upstream block. This example uses three application servers on ports 3000, 3001, and 3002. Save this block, together with the server blocks that follow, as /etc/nginx/sites-available/app-proxy.

upstream app_backend {
    least_conn;
    server 203.0.113.10:3000 max_fails=3 fail_timeout=30s;
    server 203.0.113.11:3001 max_fails=3 fail_timeout=30s;
    server 203.0.113.12:3002 max_fails=3 fail_timeout=30s backup;
    keepalive 32;
}

Configure basic reverse proxy

Create the main server configuration that handles both HTTP and HTTPS traffic with proper proxy headers.

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    
    location /.well-known/acme-challenge/ {
        root /var/www/html;
    }
    
    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;
    
    # SSL configuration will be added by certbot
    
    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    
    # Proxy configuration
    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
    }
    
    # Health check endpoint
    location /nginx-health {
        access_log off;
        return 200 "healthy";
        add_header Content-Type text/plain;
    }
}

Enable the site configuration

Enable the new site by symlinking it into sites-enabled and removing the default configuration. (The sites-available/sites-enabled layout is a Debian/Ubuntu convention; on AlmaLinux/Rocky, place the file in /etc/nginx/conf.d/ instead and skip the symlink.)

sudo ln -s /etc/nginx/sites-available/app-proxy /etc/nginx/sites-enabled/
sudo rm -f /etc/nginx/sites-enabled/default

Configure NGINX for better performance

Optimize NGINX settings for reverse proxy performance and connection handling. These settings go in the main configuration file, /etc/nginx/nginx.conf.

user www-data;  # use "nginx" on AlmaLinux/Rocky
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 2048;
    use epoll;
    multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 100M;
    server_names_hash_bucket_size 128;
    
    ##
    # Proxy Settings
    ##
    proxy_buffering on;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    
    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml+rss
        application/atom+xml
        image/svg+xml;
    
    ##
    # Rate Limiting
    ##
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=api:10m rate=60r/m;
    
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    
    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    
    ##
    # Logging Settings
    ##
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    'rt=$request_time uct="$upstream_connect_time" '
                    'uht="$upstream_header_time" urt="$upstream_response_time"';
    
    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log;
    
    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
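The rt= and urt= fields in the custom log format make latency analysis possible with standard tools. As a sketch, this computes the average upstream response time; the two sample lines are illustrative, and in production you would point awk at /var/log/nginx/access.log instead:

```shell
# Build a tiny sample log in the "main" format defined above.
cat > /tmp/sample-access.log <<'EOF'
203.0.113.50 - - [19/Apr/2026:10:00:01 +0000] "GET / HTTP/1.1" 200 512 "-" "curl/8.5" "-" rt=0.120 uct="0.002" uht="0.110" urt="0.110"
203.0.113.51 - - [19/Apr/2026:10:00:02 +0000] "GET /api/users HTTP/1.1" 200 256 "-" "curl/8.5" "-" rt=0.240 uct="0.003" uht="0.230" urt="0.230"
EOF

# Find the urt="..." field on each line, strip the label and quotes,
# and average the values.
awk '{
    for (i = 1; i <= NF; i++)
        if ($i ~ /^urt=/) { gsub(/urt=|"/, "", $i); sum += $i; n++ }
} END {
    printf "avg upstream response: %.3fs over %d requests\n", sum / n, n
}' /tmp/sample-access.log
```

Here the output is `avg upstream response: 0.170s over 2 requests`; a sustained rise in this number usually points at the backends rather than the proxy.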

Test NGINX configuration

Verify that your NGINX configuration is syntactically correct before starting the service.

sudo nginx -t

Start and enable NGINX

Start the NGINX service and enable it to start automatically at boot.

sudo systemctl enable --now nginx
sudo systemctl status nginx

Obtain SSL certificates

Use Certbot to obtain Let's Encrypt SSL certificates for your domain. Replace example.com with your actual domain.

sudo certbot --nginx -d example.com -d www.example.com --email admin@example.com --agree-tos --no-eff-email

Note: Ensure your domain's DNS records point to this server's IP address before running certbot.

Configure automatic certificate renewal

Set up automatic renewal for SSL certificates to prevent expiration.

sudo systemctl enable --now certbot.timer
sudo certbot renew --dry-run

Configure load balancing methods

Add advanced load balancing configuration with session persistence and custom health checks.

upstream app_backend {
    # Load balancing method: least_conn, ip_hash, or round_robin (default)
    least_conn;
    
    # Backend servers with health check parameters
    server 203.0.113.10:3000 max_fails=3 fail_timeout=30s weight=3;
    server 203.0.113.11:3001 max_fails=3 fail_timeout=30s weight=2;
    server 203.0.113.12:3002 max_fails=3 fail_timeout=30s weight=1 backup;
    
    # Connection pooling
    keepalive 32;
    keepalive_requests 1000;
    keepalive_timeout 60s;
}
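One caveat: upstream keepalive only takes effect when the proxied request uses HTTP/1.1 with an empty Connection header, so hard-coding proxy_set_header Connection 'upgrade' in the generic locations above effectively disables connection reuse for ordinary requests. A common pattern (a sketch you can adapt; the $connection_upgrade variable name is our choice) is a map in the http block that sends "upgrade" only when the client actually requested one:

```nginx
# In the http {} context: derive the Connection header from the client's
# Upgrade header, so keepalive works for plain requests and WebSockets
# still upgrade correctly.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      "";
}

# Then, in each proxied location, replace the hard-coded header with:
#     proxy_set_header Connection $connection_upgrade;
```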

Separate upstream for API traffic

upstream api_backend {
    ip_hash;  # Session persistence for API clients
    server 203.0.113.13:8080 max_fails=2 fail_timeout=20s;
    server 203.0.113.14:8080 max_fails=2 fail_timeout=20s;
    keepalive 16;
}

Add advanced routing and rate limiting

Configure different routing rules for API endpoints and implement rate limiting for protection.

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;
    
    # SSL configuration (managed by certbot)
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    
    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    
    # Rate limiting for general traffic
    limit_req zone=general burst=20 nodelay;
    
    # API endpoints with different rate limiting
    location /api/ {
        limit_req zone=api burst=10 nodelay;
        proxy_pass http://api_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 120;
        proxy_connect_timeout 30;
        proxy_send_timeout 120;
    }
    
    # Static files with caching
    # Note: regex locations take precedence over the /api/ and /ws/ prefix
    # matches, so e.g. /api/report.pdf is served by this block.
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|pdf|txt)$ {
        proxy_pass http://app_backend;
        # proxy_cache_valid only takes effect once a cache zone is defined
        # (proxy_cache_path in http {} plus proxy_cache here).
        proxy_cache_valid 200 1h;
        expires 1h;
        add_header Cache-Control "public, immutable";
    }
    
    # WebSocket support
    location /ws/ {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 3600;
        proxy_send_timeout 3600;
    }
    
    # Default application routing
    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 300;
        proxy_connect_timeout 30;
        proxy_send_timeout 300;
        
        # Custom error pages
        error_page 502 503 504 /50x.html;
    }
    
    # Health check endpoint
    location /nginx-health {
        access_log off;
        return 200 "healthy";
        add_header Content-Type text/plain;
    }
    
    # Custom error page
    location = /50x.html {
        root /var/www/html;
        internal;
    }
}
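By default NGINX answers rate-limited requests with 503 Service Unavailable, which is indistinguishable from a dead backend in logs and monitoring. An optional one-line addition (valid in the http or server context) makes throttling explicit:

```nginx
# Answer throttled requests with 429 Too Many Requests instead of the
# default 503, so clients and dashboards can tell rate limiting apart
# from genuine backend failures.
limit_req_status 429;
```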

Create custom error page

Create a custom error page for when backend servers are unavailable.

sudo mkdir -p /var/www/html
sudo chown www-data:www-data /var/www/html

Save the following as /var/www/html/50x.html (the path referenced by the error_page configuration above):

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Service Temporarily Unavailable</title>
</head>
<body>
    <h1>Service Temporarily Unavailable</h1>
    <p>We're experiencing technical difficulties. Please try again in a few minutes.</p>
    <p>If the problem persists, contact support.</p>
</body>
</html>

Set up health monitoring script

Create a script to monitor backend server health and log the results. Save it as /usr/local/bin/nginx-health-check.sh:

#!/bin/bash

# Backend servers to check
BACKENDS=( "203.0.113.10:3000" "203.0.113.11:3001" "203.0.113.12:3002" )

# Health check endpoint on backend servers
HEALTH_PATH="/health"

# Log file
LOG_FILE="/var/log/nginx/health-check.log"

# Function to check backend health
check_backend() {
    local backend=$1
    local url="http://${backend}${HEALTH_PATH}"
    if curl -f -s --max-time 5 "$url" > /dev/null 2>&1; then
        echo "$(date): $backend is healthy" >> "$LOG_FILE"
        return 0
    else
        echo "$(date): $backend is unhealthy" >> "$LOG_FILE"
        return 1
    fi
}

# Check all backends
for backend in "${BACKENDS[@]}"; do
    check_backend "$backend"
done

# Reload nginx if the configuration is valid
nginx -t && systemctl reload nginx

Make the script executable:

sudo chmod +x /usr/local/bin/nginx-health-check.sh
sudo chown root:root /usr/local/bin/nginx-health-check.sh

Schedule health checks

Set up automated health checks using a systemd timer for continuous monitoring.

Create /etc/systemd/system/nginx-health-check.service:

[Unit]
Description=NGINX Backend Health Check
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/nginx-health-check.sh
User=root

Create /etc/systemd/system/nginx-health-check.timer:

[Unit]
Description=Run NGINX Health Check every 30 seconds
Requires=nginx-health-check.service

[Timer]
OnCalendar=*:*:00,30
# Default timer accuracy is 1 minute; tighten it so the 30 s schedule is honored.
AccuracySec=1s
Persistent=true

[Install]
WantedBy=timers.target

Enable the timer:

sudo systemctl daemon-reload
sudo systemctl enable --now nginx-health-check.timer
sudo systemctl status nginx-health-check.timer

Configure log rotation

Set up log rotation so the health-check log doesn't consume too much disk space. Create /etc/logrotate.d/nginx-health-check:

/var/log/nginx/health-check.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    sharedscripts
    postrotate
        if [ -f /var/run/nginx.pid ]; then
            kill -USR1 "$(cat /var/run/nginx.pid)"
        fi
    endscript
}

Reload NGINX configuration

Apply all configuration changes by testing and reloading NGINX.

sudo nginx -t
sudo systemctl reload nginx

Verify your setup

Test your reverse proxy configuration and SSL termination:

# Check NGINX status
sudo systemctl status nginx

# Test the SSL certificate
curl -I https://example.com

# Check backend connectivity through the proxy
curl -H "Host: example.com" http://localhost

# Check certificate expiry (the ssl-cert-check script, if installed)
ssl-cert-check -s example.com -p 443

# Test the health check endpoint
curl https://example.com/nginx-health

# Monitor access logs
sudo tail -f /var/log/nginx/access.log

# Dump the full active configuration and inspect the upstream blocks
sudo nginx -T | grep -A 5 upstream
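For the exact certificate validity window, openssl x509 prints the notBefore/notAfter dates. Demonstrated here against a throwaway self-signed certificate so the command can be tried anywhere; in production, point -in at your live certificate, e.g. /etc/letsencrypt/live/example.com/cert.pem:

```shell
# Generate a throwaway self-signed certificate purely to demonstrate the
# inspection command; inspect the real Let's Encrypt cert in production.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
    -keyout /tmp/demo.key -out /tmp/demo.crt \
    -subj "/CN=example.com" 2>/dev/null

# Print the subject and the validity window (notBefore/notAfter).
openssl x509 -in /tmp/demo.crt -noout -subject -dates
```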

You can also monitor your reverse proxy performance and backend health with proper observability tools. The NGINX monitoring with Prometheus and Grafana tutorial shows how to set up comprehensive monitoring dashboards.
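Most NGINX exporters scrape the built-in stub_status counters; exposing them to localhost is a small addition to the HTTPS server block. A sketch, assuming the ngx_http_stub_status_module is compiled in (it is in the standard distro packages):

```nginx
# Basic connection counters for monitoring tools, restricted to localhost.
location = /stub_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}
```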

Load balancing methods

NGINX supports several load balancing algorithms you can configure in your upstream block:

Method                | Description                                                  | Use case
round_robin (default) | Distributes requests evenly across servers                   | Servers with equal capacity
least_conn            | Routes to the server with fewest active connections          | Requests with varying processing times
ip_hash               | Routes based on a hash of the client IP                      | Session persistence required
hash                  | Routes based on a custom key                                 | Content-based routing
least_time            | Routes to the server with lowest response time (NGINX Plus)  | Performance-sensitive applications
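As a sketch of the hash method, here is a variant of the upstream block using consistent hashing on the request URI. The consistent parameter (ketama-style hashing) means adding or removing a server only remaps a small fraction of keys, which matters when backends keep per-URI caches:

```nginx
upstream app_backend {
    # Route by URI with consistent hashing so most URIs stay pinned to
    # the same backend when the server set changes.
    hash $request_uri consistent;
    server 203.0.113.10:3000;
    server 203.0.113.11:3001;
    server 203.0.113.12:3002;
}
```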

Common issues

Symptom                    | Cause                                 | Fix
502 Bad Gateway errors     | Backend servers unavailable           | Check backend services: curl http://backend:port
SSL certificate errors     | Certificate expired or misconfigured  | Renew with: sudo certbot renew
High response times        | Insufficient upstream connections     | Increase keepalive and proxy_buffers
Rate limit false positives | Legitimate traffic blocked            | Adjust the burst parameter in the limit_req zones
WebSocket connections fail | Missing upgrade headers               | Verify the proxy_set_header Upgrade configuration
Client real IP missing     | Proxy headers not forwarded           | Ensure X-Forwarded-For headers are set
