Optimize Nginx caching performance with Redis backend and memory tuning for high-traffic websites

Intermediate 45 min Apr 03, 2026
Ubuntu 24.04 Ubuntu 22.04 Debian 12 AlmaLinux 9 Rocky Linux 9 Fedora 41

Supercharge your web application performance by integrating Redis as a caching backend for Nginx using lua-resty-redis. This tutorial covers Redis memory optimization, Nginx worker tuning, and comprehensive performance testing for production environments.

Prerequisites

  • Root or sudo access
  • At least 4GB RAM
  • Basic understanding of web server concepts

What this solves

High-traffic websites often struggle with slow response times and server overload due to repeated database queries and file system access. This tutorial shows you how to implement Redis as a high-performance caching backend for Nginx using Lua scripting, reducing response times from hundreds of milliseconds to single digits. You'll also optimize Redis memory usage and tune Nginx worker processes for maximum throughput.

Step-by-step installation

Update system packages and install dependencies

Start by updating your package manager and installing the required packages for Nginx, Redis, and the Lua integration. Use the apt commands on Debian/Ubuntu and the dnf commands on RHEL-family systems.

# Debian/Ubuntu
sudo apt update && sudo apt upgrade -y
sudo apt install -y nginx redis-server lua-cjson libnginx-mod-http-lua build-essential git

# AlmaLinux / Rocky Linux (Fedora ships these packages without EPEL)
sudo dnf install -y epel-release
sudo dnf update -y
sudo dnf install -y nginx redis nginx-mod-http-lua lua-cjson git gcc make

Install lua-resty-redis library

Download and install the lua-resty-redis library that enables Nginx to communicate with Redis servers efficiently.

cd /tmp
git clone https://github.com/openresty/lua-resty-redis.git
cd lua-resty-redis
sudo mkdir -p /usr/local/share/lua/5.1/resty
sudo cp lib/resty/redis.lua /usr/local/share/lua/5.1/resty/
sudo chown root:root /usr/local/share/lua/5.1/resty/redis.lua
sudo chmod 644 /usr/local/share/lua/5.1/resty/redis.lua

Configure Redis for optimal caching performance

Optimize the Redis configuration for high-performance caching with appropriate memory limits and eviction policies. Edit /etc/redis/redis.conf (on RHEL-family systems the file may be /etc/redis.conf) and apply the following settings.

# Memory optimization
maxmemory 2gb
maxmemory-policy allkeys-lru

# Performance tuning
tcp-keepalive 60
timeout 300
tcp-backlog 511

# Persistence settings for cache
save ""
appendonly no

# Network optimization
bind 127.0.0.1
port 6379
protected-mode yes

# Logging
loglevel notice
logfile /var/log/redis/redis-server.log

# Client output buffer limits
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
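With maxmemory 2gb and allkeys-lru, Redis evicts the least recently used keys once the limit is reached, so the cache never grows unbounded. A back-of-the-envelope capacity estimate (the average entry size below is an assumption for illustration, not a measured value):

```shell
# Rough cache capacity at maxmemory 2gb
# AVG_ENTRY_BYTES is an assumed figure for a cached HTML page plus key overhead
MAXMEMORY_BYTES=$((2 * 1024 * 1024 * 1024))
AVG_ENTRY_BYTES=$((4 * 1024))
echo "Rough capacity: $((MAXMEMORY_BYTES / AVG_ENTRY_BYTES)) entries"
# Rough capacity: 524288 entries
```

If your pages are larger, scale the estimate down accordingly; LRU eviction keeps the hottest entries regardless.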

Configure Nginx with Lua and Redis integration

Set up Nginx with optimized worker processes and the Lua path configuration for Redis integration. Replace the contents of /etc/nginx/nginx.conf with the following.

user www-data;
worker_processes auto;
worker_rlimit_nofile 65535;
pid /run/nginx.pid;

# Load Lua modules
load_module modules/ndk_http_module.so;
load_module modules/ngx_http_lua_module.so;

events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}

http {
    # Lua package path
    lua_package_path "/usr/local/share/lua/5.1/?.lua;;";

    # Redis connection pool
    lua_shared_dict redis_cache 10m;

    # Basic settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    keepalive_requests 1000;
    types_hash_max_size 2048;
    client_max_body_size 16M;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml text/javascript application/json application/javascript application/xml+rss application/atom+xml;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" rt=$request_time ut="$upstream_response_time"';
    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log warn;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
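The events settings bound concurrency: each worker can hold worker_connections simultaneous connections, so total capacity scales with the worker count. A quick sketch, assuming worker_processes auto resolves to a 4-core machine (the core count is a hypothetical example):

```shell
# Rough connection capacity implied by the events block
WORKER_PROCESSES=4        # assumption: 'auto' resolved to 4 CPU cores
WORKER_CONNECTIONS=4096   # matches worker_connections above
echo "Max concurrent connections: $((WORKER_PROCESSES * WORKER_CONNECTIONS))"
# Max concurrent connections: 16384
```

Note that each proxied request consumes two connections (one to the client, one to the upstream), so the effective client capacity is roughly half that figure.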

Create Redis caching virtual host

Configure a virtual host that implements Redis caching with automatic cache invalidation and fallback mechanisms. Save it as /etc/nginx/sites-available/redis-cache so it can be enabled in a later step.

upstream backend {
    server 127.0.0.1:8080;
    keepalive 32;
}

server {
    listen 80;
    server_name example.com www.example.com;
    
    # Cache configuration
    set $cache_key "$scheme$request_method$host$request_uri";
    set $cache_ttl 3600;
    
    location / {
        # Try Redis cache first
        access_by_lua_block {
            local redis = require "resty.redis"
            local red = redis:new()
            
            red:set_timeout(1000)
            
            local ok, err = red:connect("127.0.0.1", 6379)
            if not ok then
                ngx.log(ngx.ERR, "Failed to connect to Redis: ", err)
                return
            end
            
            local cache_key = ngx.var.cache_key
            local res, err = red:get(cache_key)
            
            if res and res ~= ngx.null then
                -- Return the connection to the pool before finishing the request
                red:set_keepalive(10000, 100)
                ngx.header["X-Cache"] = "HIT"
                ngx.header["Content-Type"] = "text/html"
                ngx.say(res)
                return ngx.exit(ngx.HTTP_OK)
            end
            
            red:set_keepalive(10000, 100)
        }
        
        # Cache miss - proxy to backend
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # Mark responses that were not served from the cache
        header_filter_by_lua_block {
            if not ngx.resp.get_headers()["X-Cache"] then
                ngx.header["X-Cache"] = "MISS"
            end
        }
        
        # Buffer the response body; cosockets are unavailable in body
        # filters, so the Redis write happens from a timer in log_by_lua
        body_filter_by_lua_block {
            if ngx.status == 200 then
                local chunk = ngx.arg[1]
                ngx.ctx.buffered = (ngx.ctx.buffered or "") .. (chunk or "")
            end
        }
        
        # Cache the buffered response in Redis asynchronously
        log_by_lua_block {
            local body = ngx.ctx.buffered
            if ngx.status ~= 200 or not body then
                return
            end
            
            local cache_key = ngx.var.cache_key
            local cache_ttl = tonumber(ngx.var.cache_ttl)
            
            ngx.timer.at(0, function(premature, key, ttl, value)
                if premature then
                    return
                end
                
                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeout(1000)
                
                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then
                    ngx.log(ngx.ERR, "Failed to connect to Redis: ", err)
                    return
                end
                
                red:setex(key, ttl, value)
                red:set_keepalive(10000, 100)
            end, cache_key, cache_ttl, body)
        }
    }
    
    # Cache invalidation endpoint
    location /cache/purge {
        allow 127.0.0.1;
        deny all;
        
        content_by_lua_block {
            local redis = require "resty.redis"
            local red = redis:new()
            
            red:set_timeout(1000)
            
            local ok, err = red:connect("127.0.0.1", 6379)
            if not ok then
                ngx.status = 500
                ngx.say("Failed to connect to Redis: ", err)
                return
            end
            
            -- KEYS is O(N) and blocks Redis while it scans; acceptable for a
            -- localhost-only admin endpoint, but prefer SCAN on large keyspaces
            local pattern = ngx.var.arg_pattern or "*"
            local keys, err = red:keys(pattern)
            
            if keys then
                for i, key in ipairs(keys) do
                    red:del(key)
                end
                ngx.say("Purged ", #keys, " cache entries")
            else
                ngx.status = 500
                ngx.say("Error: ", err)
            end
            
            red:set_keepalive(10000, 100)
        }
    }
    
    # Health check endpoint
    location /health {
        access_log off;
        return 200 "OK\n";
        add_header Content-Type text/plain;
    }
}
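The $cache_key variable set in the server block concatenates $scheme$request_method$host$request_uri with no separators. A quick sketch of what a stored key looks like (the request values here are made-up examples):

```shell
# Example of the cache key format: $scheme$request_method$host$request_uri
SCHEME="http"; METHOD="GET"; HOST="example.com"; URI="/products?page=2"
CACHE_KEY="${SCHEME}${METHOD}${HOST}${URI}"
echo "$CACHE_KEY"
# httpGETexample.com/products?page=2
```

Because the method is part of the key, GET and HEAD requests for the same URI are cached separately; the query string is included too, so each page of a paginated listing gets its own entry.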

Optimize system kernel parameters

Configure kernel parameters for high-performance networking and memory management. Create /etc/sysctl.d/99-nginx-redis.conf with the following settings.

# Network optimization
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 3

# Memory optimization
vm.swappiness = 1
vm.overcommit_memory = 1

# File descriptor limits
fs.file-max = 2097152

Configure system limits for high performance

Set appropriate file descriptor and process limits for the Nginx and Redis processes by adding the following to /etc/security/limits.conf (or a file under /etc/security/limits.d/). On RHEL-family systems, replace www-data with nginx.

www-data soft nofile 65535
www-data hard nofile 65535
www-data soft nproc 32768
www-data hard nproc 32768

redis soft nofile 65535
redis hard nofile 65535
redis soft nproc 32768
redis hard nproc 32768
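Entries in limits.conf only take effect for new sessions (systemd services additionally need LimitNOFILE in their unit files). After logging back in, the values active for the current session can be checked with ulimit:

```shell
# Current session limits; re-login first, limits apply at session start
ulimit -n   # max open file descriptors
ulimit -u   # max user processes
```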

Enable and start services

Enable both Redis and Nginx services and apply the new configuration settings.

sudo sysctl -p /etc/sysctl.d/99-nginx-redis.conf
sudo systemctl enable redis-server
sudo systemctl restart redis-server
sudo ln -s /etc/nginx/sites-available/redis-cache /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl enable nginx
sudo systemctl restart nginx

Create performance monitoring script

Set up a monitoring script to track cache hit rates and performance metrics. Save it as /usr/local/bin/redis-cache-stats.sh.

#!/bin/bash
# Redis and Nginx cache performance monitoring

echo "=== Redis Cache Statistics ==="
redis-cli info stats | grep -E "(keyspace_hits|keyspace_misses|used_memory_human)"

echo -e "\n=== Cache Hit Rate ==="
HITS=$(redis-cli info stats | grep keyspace_hits | cut -d: -f2 | tr -d '\r')
MISSES=$(redis-cli info stats | grep keyspace_misses | cut -d: -f2 | tr -d '\r')
TOTAL=$((HITS + MISSES))
if [ "$TOTAL" -gt 0 ]; then
    HIT_RATE=$(echo "scale=2; $HITS * 100 / $TOTAL" | bc)
    echo "Hit Rate: ${HIT_RATE}%"
else
    echo "Hit Rate: No data available"
fi

echo -e "\n=== Redis Memory Usage ==="
redis-cli info memory | grep -E "(used_memory_human|maxmemory_human)"

echo -e "\n=== Active Connections ==="
ss -tuln | grep -E ":(80|6379)" | wc -l
sudo chmod +x /usr/local/bin/redis-cache-stats.sh
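The hit-rate calculation in the script is plain counter arithmetic over the keyspace_hits and keyspace_misses counters. With hypothetical counter values it works out like this (awk is used here instead of bc, just to show the same formula without the bc dependency):

```shell
# Hit rate = hits * 100 / (hits + misses)
# The counter values below are hypothetical examples
HITS=980
MISSES=20
HIT_RATE=$(awk -v h="$HITS" -v m="$MISSES" 'BEGIN { printf "%.2f", h * 100 / (h + m) }')
echo "Hit Rate: ${HIT_RATE}%"
# Hit Rate: 98.00%
```

For a page cache, anything above roughly 90% is usually considered healthy; a low rate points at short TTLs, overly specific cache keys, or frequent purges.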

Performance testing and benchmarking

Install Apache Bench for testing

Install benchmarking tools to measure cache performance improvements.

# Debian/Ubuntu
sudo apt install -y apache2-utils bc

# AlmaLinux / Rocky Linux / Fedora
sudo dnf install -y httpd-tools bc

Run cache performance tests

Execute benchmark tests to measure cache performance and response times.

# Warm the cache (the first run includes the initial cache misses)
ab -n 1000 -c 10 http://example.com/

# Test cache hits (subsequent requests are served from Redis)
ab -n 1000 -c 50 http://example.com/

# Monitor cache statistics
/usr/local/bin/redis-cache-stats.sh
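The intro's claim of going from hundreds of milliseconds to single digits translates directly into the mean times ab reports. Both latencies below are hypothetical illustrations, not benchmark results:

```shell
# Hypothetical mean response times reported by ab, in milliseconds
UNCACHED_MS=240   # assumed backend-rendered response
CACHED_MS=4       # assumed Redis-served response
echo "Approximate speedup: $((UNCACHED_MS / CACHED_MS))x"
# Approximate speedup: 60x
```

Compare the "Time per request" and "Requests per second" lines between the two ab runs to compute your real figures.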

Verify your setup

Confirm that Redis caching is working correctly and performance is optimized.

# Check service status
sudo systemctl status nginx redis-server

# Test Redis connection
redis-cli ping

# Test cache functionality (the second response should carry X-Cache: HIT)
curl -H "Host: example.com" http://localhost/
curl -s -D - -o /dev/null -H "Host: example.com" http://localhost/ | grep -i x-cache

# Verify cache entries
redis-cli keys "*"

# Check Nginx configuration
sudo nginx -t

# Monitor performance
/usr/local/bin/redis-cache-stats.sh

# Test cache purge
curl "http://localhost/cache/purge?pattern=*"
Note: For production use, replace example.com with your actual domain and configure SSL certificates. You should also set up proper authentication for the cache purge endpoint.
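The purge endpoint selects keys with Redis glob patterns (the syntax used by KEYS, where * matches any run of characters and ? matches one). Bash's own pattern matching behaves the same way and gives a feel for which keys a pattern selects; the key and pattern below are made-up examples:

```shell
# Redis KEYS-style globbing, sketched with bash pattern matching
PATTERN="httpGETexample.com/products*"
KEY="httpGETexample.com/products?page=2"
case "$KEY" in
    $PATTERN) echo "would purge: $KEY" ;;
    *)        echo "kept: $KEY" ;;
esac
# would purge: httpGETexample.com/products?page=2
```

A targeted pattern like this lets you invalidate one section of the site without flushing the whole cache.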

Common issues

Symptom | Cause | Fix
Lua module not found | Missing Lua module for Nginx | Install the libnginx-mod-http-lua (or nginx-mod-http-lua) package and restart Nginx
Redis connection failed | Redis not running or wrong port | Check the service: sudo systemctl status redis-server
Cache not working | Lua script errors | Check the Nginx error log: sudo tail -f /var/log/nginx/error.log
High memory usage | No maxmemory limit set | Configure maxmemory and an eviction policy in redis.conf
Low cache hit rate | Short TTL or frequent purges | Increase the cache_ttl value and optimize cache keys
Performance not improved | Backend still slow | Profile the backend application and database queries


#nginx #redis #caching #performance #lua
