Set up NGINX as a reverse proxy with multiple backend servers, SSL termination, and health monitoring. Perfect for distributing traffic across application instances while handling encryption at the edge.
Prerequisites
- Root access to the server
- Domain name with DNS pointing to server
- Multiple backend application servers
What this solves
NGINX reverse proxy with load balancing distributes incoming requests across multiple backend servers while terminating SSL connections at the proxy layer. This setup improves performance, provides redundancy, and simplifies certificate management by handling encryption in one place instead of on each backend server.
Step-by-step configuration
Update system packages
Start by updating the package index and upgrading installed packages so everything is current.
sudo apt update && sudo apt upgrade -y
Install NGINX
Install the NGINX web server, which will act as our reverse proxy and load balancer.
sudo apt install -y nginx
Install SSL certificate tools
Install Certbot for Let's Encrypt certificates and OpenSSL for certificate management.
sudo apt install -y certbot python3-certbot-nginx openssl
Create upstream server configuration
Define your backend servers in an upstream block. This example uses three application servers on ports 3000, 3001, and 3002, with the third marked as a backup that only receives traffic when the primary servers are unavailable.
upstream app_backend {
least_conn;
server 203.0.113.10:3000 max_fails=3 fail_timeout=30s;
server 203.0.113.11:3001 max_fails=3 fail_timeout=30s;
server 203.0.113.12:3002 max_fails=3 fail_timeout=30s backup;
keepalive 32;
}
Configure basic reverse proxy
Create the main server configuration that handles both HTTP and HTTPS traffic with proper proxy headers. Save it as /etc/nginx/sites-available/app-proxy.
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
location /.well-known/acme-challenge/ {
root /var/www/html;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com www.example.com;
# SSL configuration will be added by certbot
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
# Proxy configuration
location / {
proxy_pass http://app_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_send_timeout 300;
}
# Health check endpoint
location /nginx-health {
access_log off;
default_type text/plain;
return 200 "healthy\n";
}
}
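Note that the proxy location above always sends `Connection: upgrade` to the backend, even for ordinary requests. A common refinement (a sketch, placed in the `http` context) uses a `map` so the header is only set when the client actually asks for a WebSocket upgrade:

```nginx
# http context: choose the Connection header value based on whether
# the client sent an Upgrade header (WebSocket handshake) or not
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then in the proxy location, replace the hard-coded value with:
#     proxy_set_header Connection $connection_upgrade;
```

With the map in place, plain HTTP requests get `Connection: close` toward the backend while upgrade requests pass through correctly.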
Enable the site configuration
Enable the new site by creating a symbolic link and removing the default configuration.
sudo ln -s /etc/nginx/sites-available/app-proxy /etc/nginx/sites-enabled/
sudo rm -f /etc/nginx/sites-enabled/default
Configure NGINX for better performance
Optimize NGINX settings for reverse proxy performance and connection handling. Back up /etc/nginx/nginx.conf, then replace it with the following.
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 2048;
use epoll;
multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 100M;
server_names_hash_bucket_size 128;
##
# Proxy Settings
##
proxy_buffering on;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
##
# Gzip Settings
##
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/json
application/javascript
application/xml+rss
application/atom+xml
image/svg+xml;
##
# Rate Limiting
##
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=api:10m rate=60r/m;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
##
# Logging Settings
##
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time"';
access_log /var/log/nginx/access.log main;
error_log /var/log/nginx/error.log;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
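The extended `main` log format above makes latency analysis possible with standard tools. As a sketch, the helper below averages the `rt=` (`$request_time`) field across a log file; the sample line in the test is illustrative, matching the format's `rt=`/`uct=`/`uht=`/`urt=` fields:

```shell
#!/usr/bin/env bash
# Compute the average request time (rt=) across an access log written
# with the extended "main" format defined above.
avg_request_time() {
    awk '{
        for (i = 1; i <= NF; i++)
            if ($i ~ /^rt=/) { sub(/^rt=/, "", $i); sum += $i; n++ }
    } END { if (n) printf "%.3f\n", sum / n }' "$1"
}
```

For example, `avg_request_time /var/log/nginx/access.log` prints the mean request time in seconds.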
Test NGINX configuration
Verify that your NGINX configuration is syntactically correct before starting the service.
sudo nginx -t
Start and enable NGINX
Start NGINX service and enable it to start automatically on system boot.
sudo systemctl enable --now nginx
sudo systemctl status nginx
Obtain SSL certificates
Use Certbot to obtain Let's Encrypt SSL certificates for your domain. Replace example.com with your actual domain.
sudo certbot --nginx -d example.com -d www.example.com --email admin@example.com --agree-tos --no-eff-email
Configure automatic certificate renewal
Set up automatic renewal for SSL certificates to prevent expiration.
sudo systemctl enable --now certbot.timer
sudo certbot renew --dry-run
Configure load balancing methods
Add advanced load balancing configuration with session persistence and custom health checks.
upstream app_backend {
# Load balancing method: least_conn, ip_hash, or round_robin (default)
least_conn;
# Backend servers with health check parameters
server 203.0.113.10:3000 max_fails=3 fail_timeout=30s weight=3;
server 203.0.113.11:3001 max_fails=3 fail_timeout=30s weight=2;
server 203.0.113.12:3002 max_fails=3 fail_timeout=30s weight=1 backup;
# Connection pooling
keepalive 32;
keepalive_requests 1000;
keepalive_timeout 60s;
}
Separate upstream for API traffic
upstream api_backend {
ip_hash; # Session persistence for API
server 203.0.113.13:8080 max_fails=2 fail_timeout=20s;
server 203.0.113.14:8080 max_fails=2 fail_timeout=20s;
keepalive 16;
}
Add advanced routing and rate limiting
Configure different routing rules for API endpoints and implement rate limiting for protection.
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com www.example.com;
# SSL configuration (managed by certbot)
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
# Rate limiting for general traffic
limit_req zone=general burst=20 nodelay;
# API endpoints with different rate limiting
location /api/ {
limit_req zone=api burst=10 nodelay;
proxy_pass http://api_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 120;
proxy_connect_timeout 30;
proxy_send_timeout 120;
}
# Static files with caching
location ~* \.(jpg|jpeg|png|gif|ico|css|js|pdf|txt)$ {
proxy_pass http://app_backend;
proxy_cache_valid 200 1h;
expires 1h;
add_header Cache-Control "public, immutable";
}
# WebSocket support
location /ws/ {
proxy_pass http://app_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 3600;
proxy_send_timeout 3600;
}
# Default application routing
location / {
proxy_pass http://app_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 300;
proxy_connect_timeout 30;
proxy_send_timeout 300;
# Custom error pages
error_page 502 503 504 /50x.html;
}
# Health check endpoint
location /nginx-health {
access_log off;
default_type text/plain;
return 200 "healthy\n";
}
# Custom error page
location = /50x.html {
root /var/www/html;
internal;
}
}
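One caveat about the static-file location above: it sets `proxy_cache_valid`, but responses are only cached once a cache zone is defined and enabled. A sketch (the zone name `static_cache` and the cache path are assumptions):

```nginx
# In the http context (e.g. a file under /etc/nginx/conf.d/):
proxy_cache_path /var/cache/nginx/static levels=1:2 keys_zone=static_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;

# Then enable it inside the static-file location:
#     proxy_cache static_cache;
#     proxy_cache_valid 200 1h;
```

Without a `proxy_cache` directive pointing at a zone, `proxy_cache_valid` has no effect.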
Create custom error page
Create a custom error page for when backend servers are unavailable.
sudo mkdir -p /var/www/html
sudo tee /var/www/html/50x.html > /dev/null << 'EOF'
<!DOCTYPE html>
<html>
<head><title>Service Temporarily Unavailable</title></head>
<body>
<h1>Service Temporarily Unavailable</h1>
<p>We're experiencing technical difficulties. Please try again in a few minutes.</p>
<p>If the problem persists, contact support.</p>
</body>
</html>
EOF
sudo chown -R www-data:www-data /var/www/html
Set up health monitoring script
Create /usr/local/bin/nginx-health-check.sh to probe each backend's health endpoint and log the results.
#!/bin/bash
# Backend servers to check
BACKENDS=(
"203.0.113.10:3000"
"203.0.113.11:3001"
"203.0.113.12:3002"
)
# Health check endpoint exposed by each backend application
HEALTH_PATH="/health"
# Log file
LOG_FILE="/var/log/nginx/health-check.log"
# Check a single backend and log the result
check_backend() {
local backend=$1
local url="http://${backend}${HEALTH_PATH}"
if curl -f -s --max-time 5 "$url" > /dev/null 2>&1; then
echo "$(date): $backend is healthy" >> "$LOG_FILE"
return 0
else
echo "$(date): $backend is unhealthy" >> "$LOG_FILE"
return 1
fi
}
# Check all backends
for backend in "${BACKENDS[@]}"; do
check_backend "$backend"
done
# If you extend this script to rewrite the upstream config, validate and reload:
nginx -t && systemctl reload nginx
sudo chmod +x /usr/local/bin/nginx-health-check.sh
sudo chown root:root /usr/local/bin/nginx-health-check.sh
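With the script logging one line per probe, the most recent status of each backend can be summarized straight from the log. A sketch (the field positions match the script's `"$(date): <backend> is <status>"` lines):

```shell
#!/usr/bin/env bash
# Print the latest recorded health status for each backend.
# The log format is "<date>: <backend> is <healthy|unhealthy>", so the
# backend is the third-from-last field and the status is the last field.
latest_status() {
    awk '{ status[$(NF-2)] = $NF } END { for (b in status) print b, status[b] }' "$1"
}
```

For example: `latest_status /var/log/nginx/health-check.log`.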
Schedule health checks
Set up automated health checks using systemd timer for continuous monitoring.
Create /etc/systemd/system/nginx-health-check.service:
[Unit]
Description=NGINX Backend Health Check
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/nginx-health-check.sh
User=root
Then create /etc/systemd/system/nginx-health-check.timer. OnCalendar fires at seconds 0 and 30 of every minute, and AccuracySec keeps systemd from coalescing the sub-minute schedule:
[Unit]
Description=Run NGINX Health Check every 30 seconds
Requires=nginx-health-check.service
[Timer]
OnCalendar=*:*:00,30
AccuracySec=1s
Persistent=true
[Install]
WantedBy=timers.target
sudo systemctl daemon-reload
sudo systemctl enable --now nginx-health-check.timer
sudo systemctl status nginx-health-check.timer
Configure log rotation
Create /etc/logrotate.d/nginx-health so the health check log doesn't consume too much disk space.
/var/log/nginx/health-check.log {
daily
missingok
rotate 14
compress
delaycompress
notifempty
sharedscripts
postrotate
if [ -f /var/run/nginx.pid ]; then
kill -USR1 "$(cat /var/run/nginx.pid)"
fi
endscript
}
Reload NGINX configuration
Apply all configuration changes by testing and reloading NGINX.
sudo nginx -t
sudo systemctl reload nginx
Verify your setup
Test your reverse proxy configuration and SSL termination:
# Check NGINX status
sudo systemctl status nginx
# Test SSL certificate
curl -I https://example.com
# Check backend connectivity through the proxy
curl -H "Host: example.com" http://localhost
# Verify certificate expiry (ssl-cert-check is packaged separately)
ssl-cert-check -s example.com -p 443
# Test health check endpoint
curl https://example.com/nginx-health
# Monitor access logs
sudo tail -f /var/log/nginx/access.log
# Dump the full configuration and inspect the upstream blocks
nginx -T | grep -A 10 upstream
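If ssl-cert-check isn't installed, plain `openssl` can report how many days remain before a certificate expires. A sketch (works on any PEM certificate file; `date -d` assumes GNU coreutils):

```shell
#!/usr/bin/env bash
# Days until a PEM certificate expires (negative once it has expired).
days_until_expiry() {
    local end_date end_epoch now_epoch
    end_date=$(openssl x509 -noout -enddate -in "$1" | cut -d= -f2)
    end_epoch=$(date -d "$end_date" +%s)
    now_epoch=$(date +%s)
    echo $(( (end_epoch - now_epoch) / 86400 ))
}
```

For the live Let's Encrypt certificate: `days_until_expiry /etc/letsencrypt/live/example.com/cert.pem`.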
You can also monitor your reverse proxy performance and backend health with proper observability tools. The NGINX monitoring with Prometheus and Grafana tutorial shows how to set up comprehensive monitoring dashboards.
Load balancing methods
NGINX supports several load balancing algorithms you can configure in your upstream block:
| Method | Description | Use Case |
|---|---|---|
| round_robin (default) | Distributes requests evenly across servers | Servers with equal capacity |
| least_conn | Routes to server with fewest active connections | Requests with varying processing times |
| ip_hash | Routes based on client IP hash | Session persistence required |
| hash | Routes based on custom key | Content-based routing |
| least_time | Routes to server with lowest response time (NGINX Plus) | Performance-sensitive applications |
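The `hash` method accepts any key; for example, consistent hashing on the request URI keeps each path pinned to the same backend, which pairs well with backend-local caches. A sketch using the upstream name and addresses from above:

```nginx
upstream app_backend {
    # Route each URI to the same backend; "consistent" (ketama) minimizes
    # remapping when a server is added or removed
    hash $request_uri consistent;
    server 203.0.113.10:3000;
    server 203.0.113.11:3001;
    server 203.0.113.12:3002;
}
```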
Common issues
| Symptom | Cause | Fix |
|---|---|---|
| 502 Bad Gateway errors | Backend servers unavailable | Check backend services: curl http://backend:port |
| SSL certificate errors | Certificate expired or misconfigured | Renew with: sudo certbot renew |
| High response times | Insufficient upstream connections | Increase keepalive and proxy_buffers |
| Rate limit false positives | Legitimate traffic blocked | Adjust burst parameter in rate limiting |
| WebSocket connections fail | Missing upgrade headers | Verify proxy_set_header Upgrade configuration |
| Client real IP missing | Proxy headers not forwarded | Ensure X-Forwarded-For headers are set |
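When chasing 502s, it helps to see which status codes the proxy is actually returning. A sketch that tallies status codes from an access log in the combined or extended `main` layout, where `$status` is the ninth whitespace-separated field (the log lines in the test are illustrative):

```shell
#!/usr/bin/env bash
# Count responses per HTTP status code from an NGINX access log
# (combined or the extended "main" format, where $status is field 9).
status_counts() {
    awk '{ counts[$9]++ } END { for (s in counts) print s, counts[s] }' "$1" | sort
}
```

A spike in 502 counts points at the backends; a spike in 499 points at clients giving up before the backend responds.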
Next steps
- Set up NGINX monitoring with Prometheus and Grafana for comprehensive observability
- Set up NGINX log analysis and monitoring to track performance metrics
- Configure NGINX rate limiting and advanced security rules for DDoS protection
- Configure NGINX SSL security hardening with perfect forward secrecy
- Set up NGINX caching optimization for better performance
Running this in production?
Automated install script
Run this to automate the entire setup
#!/usr/bin/env bash
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Default values
DOMAIN=""
BACKEND_SERVERS=""
EMAIL=""
# Usage function
usage() {
echo "Usage: $0 -d <domain> -b <backend_servers> -e <email>"
echo "Example: $0 -d example.com -b '192.168.1.10:3000,192.168.1.11:3001,192.168.1.12:3002' -e admin@example.com"
exit 1
}
# Parse arguments
while getopts "d:b:e:h" opt; do
case $opt in
d) DOMAIN="$OPTARG" ;;
b) BACKEND_SERVERS="$OPTARG" ;;
e) EMAIL="$OPTARG" ;;
h) usage ;;
*) usage ;;
esac
done
# Check required arguments
if [[ -z "$DOMAIN" || -z "$BACKEND_SERVERS" || -z "$EMAIL" ]]; then
echo -e "${RED}Error: Missing required arguments${NC}"
usage
fi
# Error handling and cleanup
cleanup() {
echo -e "${RED}Installation failed. Rolling back changes...${NC}"
systemctl stop nginx 2>/dev/null || true
systemctl disable nginx 2>/dev/null || true
# The :- defaults keep set -u from aborting if failure happens before these are set
rm -f "${NGINX_SITE_CONFIG:-}" "${NGINX_SITE_ENABLED:-}" 2>/dev/null || true
}
trap cleanup ERR
# Check if running as root or with sudo
if [[ $EUID -ne 0 ]]; then
echo -e "${RED}Error: This script must be run as root or with sudo${NC}"
exit 1
fi
echo -e "${GREEN}Starting NGINX reverse proxy installation...${NC}"
# Detect distribution
echo -e "${YELLOW}[1/10] Detecting distribution...${NC}"
if [ -f /etc/os-release ]; then
. /etc/os-release
case "$ID" in
ubuntu|debian)
PKG_MGR="apt"
PKG_UPDATE="apt update && apt upgrade -y"
PKG_INSTALL="apt install -y"
NGINX_CONF_DIR="/etc/nginx"
NGINX_SITES_AVAILABLE="/etc/nginx/sites-available"
NGINX_SITES_ENABLED="/etc/nginx/sites-enabled"
USE_SITES_STRUCTURE=true
CERTBOT_NGINX_PLUGIN="python3-certbot-nginx"
;;
almalinux|rocky|centos|rhel|ol|fedora)
PKG_MGR="dnf"
PKG_UPDATE="dnf update -y"
PKG_INSTALL="dnf install -y"
NGINX_CONF_DIR="/etc/nginx"
NGINX_SITES_AVAILABLE="/etc/nginx/conf.d"
NGINX_SITES_ENABLED="/etc/nginx/conf.d"
USE_SITES_STRUCTURE=false
CERTBOT_NGINX_PLUGIN="python3-certbot-nginx"
;;
amzn)
PKG_MGR="yum"
PKG_UPDATE="yum update -y"
PKG_INSTALL="yum install -y"
NGINX_CONF_DIR="/etc/nginx"
NGINX_SITES_AVAILABLE="/etc/nginx/conf.d"
NGINX_SITES_ENABLED="/etc/nginx/conf.d"
USE_SITES_STRUCTURE=false
CERTBOT_NGINX_PLUGIN="python3-certbot-nginx"
;;
*)
echo -e "${RED}Unsupported distribution: $ID${NC}"
exit 1
;;
esac
echo -e "${GREEN}Detected: $PRETTY_NAME${NC}"
else
echo -e "${RED}Error: Cannot detect distribution${NC}"
exit 1
fi
# Update system packages
echo -e "${YELLOW}[2/10] Updating system packages...${NC}"
eval "$PKG_UPDATE"
# Install NGINX
echo -e "${YELLOW}[3/10] Installing NGINX...${NC}"
eval "$PKG_INSTALL nginx"
# Install SSL certificate tools
echo -e "${YELLOW}[4/10] Installing SSL certificate tools...${NC}"
eval "$PKG_INSTALL certbot $CERTBOT_NGINX_PLUGIN openssl"
# Configure firewall
echo -e "${YELLOW}[5/10] Configuring firewall...${NC}"
if command -v firewall-cmd &> /dev/null; then
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload
elif command -v ufw &> /dev/null; then
ufw allow 'Nginx Full'
fi
# Create site configuration
echo -e "${YELLOW}[6/10] Creating site configuration...${NC}"
if $USE_SITES_STRUCTURE; then
NGINX_SITE_CONFIG="$NGINX_SITES_AVAILABLE/app-proxy"
NGINX_SITE_ENABLED="$NGINX_SITES_ENABLED/app-proxy"
mkdir -p "$NGINX_SITES_AVAILABLE" "$NGINX_SITES_ENABLED"
else
NGINX_SITE_CONFIG="$NGINX_SITES_AVAILABLE/app-proxy.conf"
NGINX_SITE_ENABLED="$NGINX_SITE_CONFIG"
fi
# Generate upstream configuration ($'\n' keeps real newlines; a literal "\n"
# would end up in the config file and break nginx -t)
UPSTREAM_CONFIG=""
IFS=',' read -ra SERVERS <<< "$BACKEND_SERVERS"
for i in "${!SERVERS[@]}"; do
server="${SERVERS[$i]}"
# Mark the last server as backup only when more than one server is given,
# since an upstream consisting solely of a backup server is invalid
if [[ ${#SERVERS[@]} -gt 1 && $i -eq $((${#SERVERS[@]}-1)) ]]; then
UPSTREAM_CONFIG+="    server $server max_fails=3 fail_timeout=30s backup;"$'\n'
else
UPSTREAM_CONFIG+="    server $server max_fails=3 fail_timeout=30s;"$'\n'
fi
done
# Create NGINX configuration
cat > "$NGINX_SITE_CONFIG" << EOF
upstream app_backend {
least_conn;
$UPSTREAM_CONFIG
keepalive 32;
}
server {
listen 80;
listen [::]:80;
server_name $DOMAIN www.$DOMAIN;
location /.well-known/acme-challenge/ {
root /var/www/html;
}
location / {
return 301 https://\$host\$request_uri;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name $DOMAIN www.$DOMAIN;
# SSL configuration will be added by certbot
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
# Proxy configuration
location / {
proxy_pass http://app_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_cache_bypass \$http_upgrade;
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_send_timeout 300;
}
# Health check endpoint
location /nginx-health {
access_log off;
return 200 "healthy";
add_header Content-Type text/plain;
}
}
EOF
chmod 644 "$NGINX_SITE_CONFIG"
chown root:root "$NGINX_SITE_CONFIG"
# Enable site configuration
echo -e "${YELLOW}[7/10] Enabling site configuration...${NC}"
if $USE_SITES_STRUCTURE; then
ln -sf "$NGINX_SITE_CONFIG" "$NGINX_SITE_ENABLED"
rm -f "$NGINX_SITES_ENABLED/default"
fi
# Create optimized nginx.conf
echo -e "${YELLOW}[8/10] Optimizing NGINX configuration...${NC}"
cp "$NGINX_CONF_DIR/nginx.conf" "$NGINX_CONF_DIR/nginx.conf.backup"
cat > "$NGINX_CONF_DIR/nginx.conf" << 'EOF'
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 2048;
use epoll;
multi_accept on;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 100M;
server_names_hash_bucket_size 128;
include /etc/nginx/mime.types;
default_type application/octet-stream;
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
include /etc/nginx/conf.d/*.conf;
EOF
if $USE_SITES_STRUCTURE; then
echo "    include /etc/nginx/sites-enabled/*;" >> "$NGINX_CONF_DIR/nginx.conf"
# Debian/Ubuntu runs NGINX as www-data rather than nginx
sed -i 's/^user nginx;/user www-data;/' "$NGINX_CONF_DIR/nginx.conf"
fi
echo "}" >> "$NGINX_CONF_DIR/nginx.conf"
chmod 644 "$NGINX_CONF_DIR/nginx.conf"
chown root:root "$NGINX_CONF_DIR/nginx.conf"
# Test and start NGINX
echo -e "${YELLOW}[9/10] Testing NGINX configuration and starting service...${NC}"
nginx -t
systemctl enable nginx
systemctl start nginx
# Obtain SSL certificate
echo -e "${YELLOW}[10/10] Obtaining SSL certificate...${NC}"
mkdir -p /var/www/html
certbot --nginx -d "$DOMAIN" -d "www.$DOMAIN" --email "$EMAIL" --agree-tos --no-eff-email --non-interactive
# Verification
echo -e "${YELLOW}Running verification checks...${NC}"
# Check NGINX is running
if systemctl is-active --quiet nginx; then
echo -e "${GREEN}✓ NGINX is running${NC}"
else
echo -e "${RED}✗ NGINX is not running${NC}"
exit 1
fi
# Check configuration syntax
if nginx -t &>/dev/null; then
echo -e "${GREEN}✓ NGINX configuration is valid${NC}"
else
echo -e "${RED}✗ NGINX configuration has errors${NC}"
exit 1
fi
# Check if port 443 is listening
if ss -tlnp | grep -q ':443'; then
echo -e "${GREEN}✓ HTTPS port is listening${NC}"
else
echo -e "${RED}✗ HTTPS port is not listening${NC}"
exit 1
fi
echo -e "${GREEN}Installation completed successfully!${NC}"
echo -e "${YELLOW}Configuration details:${NC}"
echo "- Domain: $DOMAIN"
echo "- Backend servers: $BACKEND_SERVERS"
echo "- SSL certificate email: $EMAIL"
echo "- NGINX config: $NGINX_SITE_CONFIG"
echo "- Health check endpoint: https://$DOMAIN/nginx-health"
echo ""
echo -e "${YELLOW}Next steps:${NC}"
echo "1. Update DNS records to point $DOMAIN to this server"
echo "2. Test the load balancer with: curl -k https://$DOMAIN/nginx-health"
echo "3. Monitor logs with: tail -f /var/log/nginx/access.log"
Review the script before running. Save it as install.sh and execute with: sudo bash install.sh -d example.com -b '203.0.113.10:3000,203.0.113.11:3001,203.0.113.12:3002' -e admin@example.com