Install and configure Varnish Cache 7 with NGINX backend for high-performance web acceleration

Intermediate · 45 min · Apr 03, 2026
Ubuntu 24.04 Ubuntu 22.04 Debian 12 AlmaLinux 9 Rocky Linux 9 Fedora 41

Set up Varnish Cache 7 as a high-performance HTTP accelerator with NGINX backend integration. This tutorial covers installation, SSL termination, cache optimization, purging mechanisms, and monitoring for production environments.

Prerequisites

  • Root or sudo access
  • 2GB+ available RAM
  • Basic understanding of web servers
  • Domain with DNS configured

What this solves

Varnish Cache is a powerful HTTP accelerator that sits between your web server and users, dramatically reducing backend load and improving response times. By caching frequently requested content in memory, Varnish can serve thousands of requests per second while your NGINX backend handles only cache misses and dynamic content. This setup is essential for high-traffic websites that need sub-millisecond response times and can handle traffic spikes without overwhelming your origin servers.

Step-by-step installation

Update system packages

Start by updating your package manager so you get the latest package information and security updates. Use the commands for your distribution.

# Debian/Ubuntu
sudo apt update && sudo apt upgrade -y

# RHEL-family (AlmaLinux, Rocky Linux, Fedora)
sudo dnf update -y

Install Varnish Cache 7

Install Varnish Cache 7 from the official packagecloud repositories. The package includes the varnishd daemon, configuration tools, and management utilities.

# Debian/Ubuntu
curl -fsSL https://packagecloud.io/varnishcache/varnish70/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/varnish-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/varnish-archive-keyring.gpg] https://packagecloud.io/varnishcache/varnish70/ubuntu/ $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/varnish.list
sudo apt update
sudo apt install -y varnish

# RHEL-family (AlmaLinux, Rocky Linux, Fedora)
sudo dnf install -y epel-release
curl -s https://packagecloud.io/install/repositories/varnishcache/varnish70/script.rpm.sh | sudo bash
sudo dnf install -y varnish
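Before continuing, verify the installed version (this assumes varnishd ended up on the PATH, which the packages above arrange):

```shell
# Confirm a varnish-7.x build was installed; the version banner goes to stderr
varnishd -V
```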

Install and configure NGINX backend

Install NGINX to serve as the backend origin server. Varnish will proxy requests to NGINX when content is not cached.

# Debian/Ubuntu
sudo apt install -y nginx

# RHEL-family
sudo dnf install -y nginx

Configure NGINX backend on port 8080

Configure NGINX to run on port 8080 as the backend server, leaving ports 80 and 443 free for Varnish and the SSL frontend. Save the configuration below as /etc/nginx/sites-available/backend (on RHEL-family systems without sites-available, use /etc/nginx/conf.d/backend.conf and skip the symlink step).

server {
    listen 8080;
    server_name example.com www.example.com;
    root /var/www/html;
    index index.html index.php;

    # Add cache control headers for Varnish
    location ~* \.(css|js|png|jpg|jpeg|gif|ico|svg)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        add_header X-Backend-Server "nginx";
    }

    location / {
        try_files $uri $uri/ =404;
        add_header X-Backend-Server "nginx";
    }

    # PHP processing (if needed)
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php-fpm.sock;
        add_header X-Backend-Server "nginx";
    }
}

Enable NGINX backend configuration

Link the backend configuration and restart NGINX to listen on port 8080.

sudo ln -sf /etc/nginx/sites-available/backend /etc/nginx/sites-enabled/
sudo rm -f /etc/nginx/sites-enabled/default
sudo nginx -t
sudo systemctl restart nginx
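Before putting Varnish in front of it, confirm the backend answers directly on port 8080:

```shell
# Expect an HTTP/1.1 200 OK status line plus the X-Backend-Server: nginx header
curl -sI http://127.0.0.1:8080/
```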

Configure Varnish VCL

Create a Varnish Configuration Language (VCL) file that defines caching behavior, backend configuration, and cache policies.

vcl 4.1;

import directors;

backend nginx_backend {
    .host = "127.0.0.1";
    .port = "8080";
    .connect_timeout = 5s;
    .first_byte_timeout = 30s;
    .between_bytes_timeout = 5s;
    .max_connections = 100;
    .probe = {
        .url = "/";
        .timeout = 3s;
        .interval = 5s;
        .window = 5;
        .threshold = 3;
    };
}

sub vcl_init {
    new backend_director = directors.round_robin();
    backend_director.add_backend(nginx_backend);
}

acl purge_allowed {
    "127.0.0.1";
    "::1";
}

sub vcl_recv {
    set req.backend_hint = backend_director.backend();
    
    # Remove port from host header
    set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");
    
    # Handle purge requests (VCL requires an ACL for IP matching)
    if (req.method == "PURGE") {
        if (!client.ip ~ purge_allowed) {
            return (synth(403, "Not allowed"));
        }
        return (purge);
    }
    
    # Only cache GET and HEAD requests
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }
    
    # Don't cache requests with cookies (except specific ones)
    if (req.http.Cookie) {
        if (req.http.Cookie ~ "(wordpress_|wp-settings-|wp_lang|woocommerce_)" ||
            req.http.Cookie ~ "(PHPSESSID|JSESSIONID)") {
            return (pass);
        } else {
            unset req.http.Cookie;
        }
    }
    
    # Don't cache admin areas
    if (req.url ~ "^/(admin|wp-admin|administrator)" ||
        req.url ~ "^/wp-(login|cron)") {
        return (pass);
    }
    
    return (hash);
}

sub vcl_backend_response {
    # Set cache TTL based on content type
    if (beresp.http.Content-Type ~ "text/(html|xml)") {
        set beresp.ttl = 300s;  # 5 minutes for HTML
    } elsif (beresp.http.Content-Type ~ "(image|css|javascript|font)") {
        set beresp.ttl = 86400s; # 24 hours for static assets
    }
    
    # Don't cache error responses
    if (beresp.status >= 400) {
        set beresp.ttl = 0s;
        set beresp.uncacheable = true;
        return (deliver);
    }
    
    # Remove unnecessary headers
    unset beresp.http.Server;
    unset beresp.http.X-Powered-By;
    
    # Add cache status header
    set beresp.http.X-Cache-Status = "MISS";
    
    return (deliver);
}

sub vcl_deliver {
    # Add cache hit/miss headers
    if (obj.hits > 0) {
        set resp.http.X-Cache-Status = "HIT";
        set resp.http.X-Cache-Hits = obj.hits;
    } else {
        set resp.http.X-Cache-Status = "MISS";
    }
    
    # Add cache age header
    set resp.http.X-Cache-Age = resp.http.Age;
    
    # Security headers
    set resp.http.X-Frame-Options = "SAMEORIGIN";
    set resp.http.X-Content-Type-Options = "nosniff";
    set resp.http.X-XSS-Protection = "1; mode=block";
    
    # Remove internal headers
    unset resp.http.Via;
    unset resp.http.X-Varnish;
    
    return (deliver);
}

sub vcl_purge {
    # Reached after a successful return (purge) from vcl_recv,
    # so PURGE handling in vcl_hit/vcl_miss is not needed
    return (synth(200, "Purged from cache"));
}
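Before deploying the VCL, compile it to catch syntax errors (the path assumes the default /etc/varnish/default.vcl):

```shell
# -C compiles the VCL to C and exits; a non-zero status means a syntax error
sudo varnishd -C -f /etc/varnish/default.vcl > /dev/null && echo "VCL OK"
```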

Configure Varnish daemon settings

Configure Varnish memory allocation, listening ports, and performance parameters for production use.

[Unit]
Description=Varnish HTTP accelerator
Documentation=https://www.varnish-cache.org/docs/
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
LimitNOFILE=131072
LimitMEMLOCK=85983232
ExecStart=/usr/sbin/varnishd \
    -a :80 \
    -a :6081,HTTP \
    -f /etc/varnish/default.vcl \
    -s malloc,2G \
    -T 127.0.0.1:6082 \
    -S /etc/varnish/secret \
    -p thread_pool_min=50 \
    -p thread_pool_max=1000 \
    -p thread_pools=4 \
    -p timeout_linger=0.050 \
    -p feature=+esi_ignore_https \
    -p vcc_allow_inline_c=on
ExecReload=/usr/sbin/varnishreload
KillMode=process
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
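One way to apply this unit is to override the packaged one and restart the daemon:

```shell
# Open the full unit for editing and paste in the configuration above
sudo systemctl edit --full varnish
# Pick up the changed unit file and restart varnishd with the new options
sudo systemctl daemon-reload
sudo systemctl restart varnish
```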

Set up NGINX SSL termination frontend

Configure NGINX to handle SSL termination on port 443 and proxy requests to Varnish on port 80. This provides HTTPS support while maintaining cache performance.

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;
    
    # SSL configuration
    ssl_certificate /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    
    # Proxy to Varnish
    location / {
        proxy_pass http://127.0.0.1:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        
        # Proxy timeouts
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        
        # Buffer settings
        proxy_buffering on;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
    
    # Health check endpoint
    location /varnish-health {
        access_log off;
        proxy_pass http://127.0.0.1:6081/varnish-health;
        proxy_set_header Host $host;
    }
}

Redirect HTTP to HTTPS

Because Varnish itself owns port 80, an NGINX server block cannot intercept plain-HTTP requests for a redirect. One common approach is to do it in VCL instead: add the check inside the existing vcl_recv (after the PURGE handling, so localhost purges keep working over plain HTTP) and add the vcl_synth subroutine at the end of the file. The 750 status is an arbitrary internal marker.

    # Inside the existing vcl_recv, after the PURGE block:
    if (req.http.X-Forwarded-Proto != "https") {
        return (synth(750));
    }

sub vcl_synth {
    if (resp.status == 750) {
        set resp.status = 301;
        set resp.http.Location = "https://" + req.http.Host + req.url;
        return (deliver);
    }
}

Enable SSL frontend and reload NGINX

Enable the SSL frontend configuration and test the NGINX configuration before reloading.

sudo ln -sf /etc/nginx/sites-available/ssl-frontend /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
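Verify SSL termination end to end; -k skips certificate validation in case you are still testing with a self-signed certificate:

```shell
# Expect a 200 response over HTTPS plus the X-Cache-Status header added by Varnish
curl -skI https://example.com/
```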

Configure firewall rules

Open the necessary ports for HTTP, HTTPS, and Varnish management. This ensures proper traffic flow while maintaining security.

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow from 127.0.0.1 to any port 6081
sudo ufw allow from 127.0.0.1 to any port 6082
sudo ufw allow from 127.0.0.1 to any port 8080
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='127.0.0.1' port port='6081' protocol='tcp' accept"
sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='127.0.0.1' port port='6082' protocol='tcp' accept"
sudo firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='127.0.0.1' port port='8080' protocol='tcp' accept"
sudo firewall-cmd --reload
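Confirm the rules took effect with your firewall's status command:

```shell
# Debian/Ubuntu (ufw)
sudo ufw status numbered

# RHEL-family (firewalld)
sudo firewall-cmd --list-all
```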

Start and enable services

Start Varnish and NGINX services and configure them to start automatically on system boot.

sudo systemctl daemon-reload
sudo systemctl enable --now varnish
sudo systemctl enable --now nginx
sudo systemctl status varnish
sudo systemctl status nginx

Create cache management scripts

Set up scripts for cache purging and monitoring to manage your Varnish cache effectively in production. Save the following purge script as /usr/local/bin/varnish-purge (the path matches the chmod step below).

#!/bin/bash
# Varnish cache purge script

if [ $# -eq 0 ]; then
    echo "Usage: $0 <url-pattern>"
    echo "Example: $0 /images/logo.png"
    echo "Example: $0 '.*\.css'"
    exit 1
fi

URL_PATTERN="$1"
VARNISH_HOST="127.0.0.1"
VARNISH_PORT="80"

echo "Purging cache for pattern: $URL_PATTERN"
curl -X PURGE -H "Host: example.com" "http://$VARNISH_HOST:$VARNISH_PORT$URL_PATTERN"
echo

Make purge script executable

Set proper permissions on the cache purge script so it can be executed by administrators.

sudo chmod 755 /usr/local/bin/varnish-purge

Configure log rotation

Set up log rotation for Varnish logs to prevent disk space issues and maintain system performance.

/var/log/varnish/*.log {
    daily
    missingok
    rotate 52
    compress
    delaycompress
    create 644 varnishlog varnish
    postrotate
        # varnishncsa is what writes these logs; signal it to reopen its files
        # (copytruncate is omitted because it conflicts with create/postrotate)
        systemctl reload varnishncsa > /dev/null 2>&1 || true
    endscript
}
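logrotate can dry-run a configuration before the first real rotation; this assumes the rules above were saved as /etc/logrotate.d/varnish:

```shell
# -d prints what would happen without rotating or deleting anything
sudo logrotate -d /etc/logrotate.d/varnish
```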

Verify your setup

Test your Varnish cache installation and verify that caching is working correctly with these commands.

# Check service status
sudo systemctl status varnish nginx

Verify Varnish is listening on correct ports

sudo netstat -tlnp | grep varnish

Test cache hit/miss headers

curl -I http://example.com/
curl -I http://example.com/   # Second request should show X-Cache-Status: HIT

Check Varnish statistics

varnishstat -1

View cache hit ratio

varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss

Monitor live requests

varnishlog -q 'ReqURL ~ "/"'

Test cache purge

/usr/local/bin/varnish-purge /

Verify backend health

varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret backend.list

Performance optimization

Configure memory and threading

Optimize Varnish memory allocation and threading based on your server specifications and traffic patterns.

# For servers with 8GB+ RAM, allocate 4GB to Varnish
sudo systemctl edit varnish
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd \
    -a :80 \
    -a :6081,HTTP \
    -f /etc/varnish/default.vcl \
    -s malloc,4G \
    -T 127.0.0.1:6082 \
    -S /etc/varnish/secret \
    -p thread_pool_min=100 \
    -p thread_pool_max=2000 \
    -p thread_pools=6 \
    -p timeout_linger=0.100 \
    -p lru_interval=600 \
    -p feature=+esi_ignore_https \
    -p vcc_allow_inline_c=on

Add advanced caching rules

Implement sophisticated caching rules for different content types and user scenarios to maximize cache efficiency.

# Add to existing VCL file

sub vcl_recv {
    # Existing rules...
    
    # Tag API requests (informational only; the TTL itself is set in vcl_backend_response)
    if (req.url ~ "^/api/") {
        set req.http.X-Cache-TTL = "60s";
    }
    
    # Don't cache POST/PUT/DELETE requests
    if (req.method ~ "^(POST|PUT|DELETE|PATCH)$") {
        return (pass);
    }
    
    # Normalize User-Agent for better cache efficiency
    if (req.http.User-Agent ~ "(bot|crawler|spider)") {
        set req.http.User-Agent = "crawler";
    } elsif (req.http.User-Agent ~ "Mobile") {
        set req.http.User-Agent = "mobile";
    } else {
        set req.http.User-Agent = "desktop";
    }
    
    # Strip tracking parameters
    if (req.url ~ "(\?|&)(utm_[a-z]+|fbclid|gclid)=") {
        set req.url = regsuball(req.url, "(utm_[a-z]+|fbclid|gclid)=[^&]*&?", "");
        set req.url = regsub(req.url, "(\?|&)$", "");
    }
    
    return (hash);
}

sub vcl_backend_response {
    # Existing rules...
    
    # Enable grace mode for better user experience
    set beresp.grace = 1h;
    set beresp.keep = 24h;
    
    # Cache API responses for a shorter time (this must come after the
    # general grace setting above, or it would be overwritten)
    if (bereq.url ~ "^/api/" && beresp.status == 200) {
        set beresp.ttl = 60s;
        set beresp.grace = 30s;
    }
    
    return (deliver);
}
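VCL changes can be activated without restarting Varnish (and without dropping the cache) through the management interface; the label update01 is an arbitrary name:

```shell
# Compile and load the updated VCL under a new label, then switch to it
sudo varnishadm vcl.load update01 /etc/varnish/default.vcl
sudo varnishadm vcl.use update01
# List loaded VCLs; the active one is marked
sudo varnishadm vcl.list
```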

Set up monitoring dashboard

Create a simple monitoring script to track cache performance and system health.

#!/bin/bash
# Varnish performance monitoring script; save as /usr/local/bin/varnish-stats

echo "=== Varnish Cache Statistics ==="
echo "Uptime: $(varnishstat -1 -f MAIN.uptime | awk '{print $2}')"
echo "Cache Hits: $(varnishstat -1 -f MAIN.cache_hit | awk '{print $2}')"
echo "Cache Misses: $(varnishstat -1 -f MAIN.cache_miss | awk '{print $2}')"
echo "Hit-for-pass: $(varnishstat -1 -f MAIN.cache_hitpass | awk '{print $2}')"
echo "Objects in Cache: $(varnishstat -1 -f MAIN.n_object | awk '{print $2}')"
echo "Memory Usage: $(varnishstat -1 -f SMA.s0.g_bytes | awk '{print $2}')"
echo "Backend Connections: $(varnishstat -1 -f MAIN.backend_conn | awk '{print $2}')"
echo "Client Requests: $(varnishstat -1 -f MAIN.client_req | awk '{print $2}')"
echo

# Calculate hit ratio (awk, not cut, because varnishstat pads with spaces)
HITS=$(varnishstat -1 -f MAIN.cache_hit | awk '{print $2}')
MISSES=$(varnishstat -1 -f MAIN.cache_miss | awk '{print $2}')
TOTAL=$((HITS + MISSES))
if [ "$TOTAL" -gt 0 ]; then
    HIT_RATIO=$(echo "scale=2; $HITS * 100 / $TOTAL" | bc -l)
    echo "Cache Hit Ratio: ${HIT_RATIO}%"
fi
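The ratio arithmetic can be sanity-checked in isolation with hypothetical counter values (the numbers below are made up, not real varnishstat output):

```shell
# Hypothetical counters: 950 hits, 50 misses -> 95.00% hit ratio
HITS=950
MISSES=50
HIT_RATIO=$(awk -v h="$HITS" -v m="$MISSES" 'BEGIN { printf "%.2f", h * 100 / (h + m) }')
echo "Cache Hit Ratio: ${HIT_RATIO}%"   # prints: Cache Hit Ratio: 95.00%
```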

Make monitoring script executable

Set permissions and test the monitoring script to ensure it works correctly.

sudo chmod 755 /usr/local/bin/varnish-stats
sudo apt install -y bc  # Required for calculations
varnish-stats
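To record statistics over time, the script can be scheduled with cron (the log path below is an assumption; adjust to taste):

```shell
# Append a crontab entry that runs the stats script every 5 minutes
( crontab -l 2>/dev/null; echo "*/5 * * * * /usr/local/bin/varnish-stats >> /var/log/varnish-stats.log 2>&1" ) | crontab -
```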

Common issues

Symptom | Cause | Fix
503 Backend fetch failed | NGINX backend not responding | Check NGINX status: sudo systemctl status nginx
Low cache hit ratio | Too many uncacheable requests | Review VCL rules and add more caching policies
High memory usage | Cache size too large for available RAM | Reduce malloc size in systemd service file
SSL not working | Missing SSL certificates or wrong proxy config | Check cert paths and NGINX SSL frontend config
Cache not purging | Purge requests blocked or wrong URL | Verify purge IP allowlist in VCL and test with curl
High CPU usage | Too many threads or insufficient resources | Adjust thread pool settings in systemd service
Connection refused on port 80 | Varnish not listening or firewall blocking | Run netstat -tlnp, check for :80, and review firewall rules
Backend health check failing | Probe URL not accessible | Update probe URL in VCL backend definition

Security considerations

Follow these security best practices to protect your Varnish cache deployment. Secure the management interface by restricting access to localhost only and using strong authentication. Configure proper headers to prevent cache poisoning attacks and ensure sensitive content is never cached.

Security Warning: Never expose Varnish management ports (6081, 6082) to public networks. Always restrict purge requests to trusted IP addresses and implement proper authentication for cache management operations.

Consider implementing these additional security measures. Use nftables firewall rules for advanced traffic filtering and protection against DDoS attacks. Monitor your cache performance and security with centralized logging systems and set up alerts for unusual cache behavior or security events.
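As a sketch of the nftables approach, the rules below drop non-loopback traffic to the management and backend ports (the table and chain names are arbitrary):

```shell
# Create a dedicated table and drop external access to ports 6081, 6082 and 8080
sudo nft add table inet varnish_guard
sudo nft add chain inet varnish_guard input '{ type filter hook input priority 0 ; }'
sudo nft add rule inet varnish_guard input iifname != "lo" tcp dport '{ 6081, 6082, 8080 }' drop
```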


#varnish #cache #nginx #web-acceleration #performance
