Set up Varnish Cache 7 as a high-performance HTTP accelerator with NGINX backend integration. This tutorial covers installation, SSL termination, cache optimization, purging mechanisms, and monitoring for production environments.
Prerequisites
- Root or sudo access
- 2GB+ available RAM
- Basic understanding of web servers
- Domain with DNS configured
What this solves
Varnish Cache is a powerful HTTP accelerator that sits between your web server and users, dramatically reducing backend load and improving response times. By caching frequently requested content in memory, Varnish can serve thousands of requests per second while your NGINX backend handles only cache misses and dynamic content. This setup is essential for high-traffic websites that need sub-millisecond response times and can handle traffic spikes without overwhelming your origin servers.
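The effect is easy to observe from the command line once the stack is running. This quick check (example.com standing in for your own domain, X-Cache-Status being the header added by the VCL later in this guide) compares a cold and a warm request:

```shell
# First request misses and goes to the backend; the repeat should hit the cache
curl -s -o /dev/null -w "first:  %{time_total}s\n" http://example.com/
curl -s -o /dev/null -w "second: %{time_total}s\n" http://example.com/
# Inspect the cache status header set by the VCL in this guide
curl -sI http://example.com/ | grep -i x-cache-status
```

On a warm cache the second timing is typically an order of magnitude lower than the first.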
Step-by-step installation
Update system packages
Start by updating your package manager to ensure you get the latest package information and security updates.
sudo apt update && sudo apt upgrade -y
Install Varnish Cache 7
Install Varnish Cache 7 from the official repositories. The package includes the varnishd daemon, configuration tools, and management utilities.
curl -fsSL https://packagecloud.io/varnishcache/varnish70/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/varnish-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/varnish-archive-keyring.gpg] https://packagecloud.io/varnishcache/varnish70/ubuntu/ $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/varnish.list
sudo apt update
sudo apt install -y varnish
Install and configure NGINX backend
Install NGINX to serve as the backend origin server. Varnish will proxy requests to NGINX when content is not cached.
sudo apt install -y nginx
Configure NGINX backend on port 8080
Configure NGINX to run on port 8080 as the backend server. This frees ports 80 and 443 for Varnish and the SSL frontend to handle incoming requests.
server {
listen 8080;
server_name example.com www.example.com;
root /var/www/html;
index index.html index.php;
# Add cache control headers for Varnish
location ~* \.(css|js|png|jpg|jpeg|gif|ico|svg)$ {
expires 1y;
add_header Cache-Control "public, immutable";
add_header X-Backend-Server "nginx";
}
location / {
try_files $uri $uri/ =404;
add_header X-Backend-Server "nginx";
}
# PHP processing (if needed)
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php-fpm.sock;
add_header X-Backend-Server "nginx";
}
}
Enable NGINX backend configuration
Save the server block above as /etc/nginx/sites-available/backend, then link it and restart NGINX to listen on port 8080.
sudo ln -sf /etc/nginx/sites-available/backend /etc/nginx/sites-enabled/
sudo rm -f /etc/nginx/sites-enabled/default
sudo nginx -t
sudo systemctl restart nginx
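Before putting Varnish in front, it is worth confirming the backend answers on port 8080 by itself:

```shell
# The backend should respond directly on 8080 with the marker header
curl -sI -H "Host: example.com" http://127.0.0.1:8080/ | grep -i x-backend-server
```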
Configure Varnish VCL
Create /etc/varnish/default.vcl, the Varnish Configuration Language (VCL) file that defines caching behavior, backend configuration, and cache policies.
vcl 4.1;
import directors;
backend nginx_backend {
.host = "127.0.0.1";
.port = "8080";
.connect_timeout = 5s;
.first_byte_timeout = 30s;
.between_bytes_timeout = 5s;
.max_connections = 100;
.probe = {
.url = "/";
.timeout = 3s;
.interval = 5s;
.window = 5;
.threshold = 3;
};
}
sub vcl_init {
new backend_director = directors.round_robin();
backend_director.add_backend(nginx_backend);
}
acl purge_allowed {
"127.0.0.1";
"::1";
}
sub vcl_recv {
set req.backend_hint = backend_director.backend();
# Remove port from host header
set req.http.Host = regsub(req.http.Host, ":[0-9]+$", "");
# Handle purge requests (allowed from localhost only)
if (req.method == "PURGE") {
if (client.ip !~ purge_allowed) {
return (synth(403, "Not allowed"));
}
return (purge);
}
# Only cache GET and HEAD requests
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Don't cache requests with cookies (except specific ones)
if (req.http.Cookie) {
if (req.http.Cookie ~ "(wordpress_|wp-settings-|wp_lang|woocommerce_)" ||
req.http.Cookie ~ "(PHPSESSID|JSESSIONID)") {
return (pass);
} else {
unset req.http.Cookie;
}
}
# Don't cache admin areas
if (req.url ~ "^/(admin|wp-admin|administrator)" ||
req.url ~ "^/wp-(login|cron)") {
return (pass);
}
return (hash);
}
sub vcl_backend_response {
# Set cache TTL based on content type
if (beresp.http.Content-Type ~ "text/(html|xml)") {
set beresp.ttl = 300s; # 5 minutes for HTML
} elsif (beresp.http.Content-Type ~ "(image|css|javascript|font)") {
set beresp.ttl = 86400s; # 24 hours for static assets
}
# Don't cache error responses
if (beresp.status >= 400) {
set beresp.ttl = 0s;
set beresp.uncacheable = true;
return (deliver);
}
# Remove unnecessary headers
unset beresp.http.Server;
unset beresp.http.X-Powered-By;
# Add cache status header
set beresp.http.X-Cache-Status = "MISS";
return (deliver);
}
sub vcl_deliver {
# Add cache hit/miss headers
if (obj.hits > 0) {
set resp.http.X-Cache-Status = "HIT";
set resp.http.X-Cache-Hits = obj.hits;
} else {
set resp.http.X-Cache-Status = "MISS";
}
# Add cache age header
set resp.http.X-Cache-Age = resp.http.Age;
# Security headers
set resp.http.X-Frame-Options = "SAMEORIGIN";
set resp.http.X-Content-Type-Options = "nosniff";
set resp.http.X-XSS-Protection = "1; mode=block";
# Remove internal headers
unset resp.http.Via;
unset resp.http.X-Varnish;
return (deliver);
}
Note that return (purge) in vcl_recv already removes the object and generates the synthetic response, so under VCL 4.1 no extra PURGE handling is needed in vcl_hit or vcl_miss; custom handlers there would never be reached.
Configure Varnish daemon settings
Configure Varnish memory allocation, listening ports, and performance parameters for production use. Place this unit in a systemd override (sudo systemctl edit --full varnish) so package upgrades do not overwrite it.
[Unit]
Description=Varnish HTTP accelerator
Documentation=https://www.varnish-cache.org/docs/
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
LimitNOFILE=131072
LimitMEMLOCK=85983232
ExecStart=/usr/sbin/varnishd \
-a :80 \
-a :6081,HTTP \
-f /etc/varnish/default.vcl \
-s malloc,2G \
-T 127.0.0.1:6082 \
-S /etc/varnish/secret \
-p thread_pool_min=50 \
-p thread_pool_max=1000 \
-p thread_pools=4 \
-p timeout_linger=0.050 \
-p feature=+esi_ignore_https
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
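After reloading systemd and restarting Varnish, the runtime parameters can be checked through the management interface (paths and ports match the unit above):

```shell
sudo systemctl daemon-reload
sudo systemctl restart varnish
# Confirm the tuned parameters took effect
sudo varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret param.show thread_pool_min
sudo varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret param.show thread_pool_max
```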
Set up NGINX SSL termination frontend
Configure NGINX to handle SSL termination on port 443 and proxy requests to Varnish on port 80. This provides HTTPS support while maintaining cache performance.
server {
listen 443 ssl http2;
server_name example.com www.example.com;
# SSL configuration
ssl_certificate /etc/ssl/certs/example.com.crt;
ssl_certificate_key /etc/ssl/private/example.com.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
# Proxy to Varnish
location / {
proxy_pass http://127.0.0.1:80;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-Port 443;
# Proxy timeouts
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# Buffer settings
proxy_buffering on;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
}
# Health check endpoint
location /varnish-health {
access_log off;
proxy_pass http://127.0.0.1:6081/varnish-health;
proxy_set_header Host $host;
}
}
Redirect HTTP to HTTPS
Because Varnish itself listens on port 80, the redirect has to come from Varnish rather than from a second NGINX listener. Add the following to vcl_recv (after the PURGE handling, so purges still work) together with a vcl_synth handler:
if (req.http.X-Forwarded-Proto != "https" && req.url !~ "^/varnish-health") {
return (synth(750, ""));
}
sub vcl_synth {
if (resp.status == 750) {
set resp.status = 301;
set resp.http.Location = "https://" + req.http.Host + req.url;
return (deliver);
}
}
Enable SSL frontend and reload NGINX
Enable the SSL frontend configuration and test the NGINX configuration before reloading.
sudo ln -sf /etc/nginx/sites-available/ssl-frontend /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
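A quick check that TLS terminates at NGINX and requests still pass through the cache (example.com stands in for your domain):

```shell
# HTTPS responses should carry the cache status header added by Varnish
curl -skI https://example.com/ | grep -iE '^(HTTP|x-cache-status)'
# Inspect the negotiated protocol and certificate
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | head -n 6
```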
Configure firewall rules
Open ports for HTTP and HTTPS only. Loopback traffic is never filtered by UFW, and with the default deny-incoming policy the backend (8080) and management (6082) ports stay unreachable from outside, which is exactly what you want.
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw status verbose
Start and enable services
Start Varnish and NGINX services and configure them to start automatically on system boot.
sudo systemctl daemon-reload
sudo systemctl enable --now varnish
sudo systemctl enable --now nginx
sudo systemctl status varnish
sudo systemctl status nginx
Create cache management scripts
Set up scripts for cache purging and monitoring to manage your Varnish cache effectively in production. Save the purge script below as /usr/local/bin/varnish-purge.
#!/bin/bash
# Varnish cache purge script (exact-URL purge; use varnishadm bans for patterns)
if [ $# -eq 0 ]; then
echo "Usage: $0 <url-path>"
echo "Example: $0 /images/logo.png"
exit 1
fi
URL_PATH="$1"
VARNISH_HOST="127.0.0.1"
VARNISH_PORT="80"
echo "Purging cache for: $URL_PATH"
curl -X PURGE -H "Host: example.com" "http://$VARNISH_HOST:$VARNISH_PORT$URL_PATH"
echo
Make purge script executable
Set proper permissions on the cache purge script so it can be executed by administrators.
sudo chmod 755 /usr/local/bin/varnish-purge
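The PURGE method removes one exact URL. For pattern-based invalidation Varnish uses bans, issued over the management interface; the expressions below are examples:

```shell
# Invalidate every cached object for this host whose URL ends in .css
sudo varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret \
    ban 'req.http.host == "example.com" && req.url ~ "\.css$"'
# Show bans still being matched against cached objects
sudo varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret ban.list
```

Bans are evaluated lazily against cached objects, so ban.list shrinks as objects are tested or expire.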
Configure log rotation
Set up log rotation for Varnish access logs (written by varnishncsa, if you enable that service) to prevent disk space issues and maintain system performance.
/var/log/varnish/*.log {
daily
missingok
rotate 52
compress
delaycompress
notifempty
create 644 varnishlog varnish
sharedscripts
postrotate
systemctl restart varnishncsa > /dev/null 2>&1 || true
endscript
}
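Assuming the block above was saved as /etc/logrotate.d/varnish, a dry run verifies the configuration before the nightly cron job picks it up:

```shell
# -d prints what would happen without rotating anything
sudo logrotate -d /etc/logrotate.d/varnish
```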
Verify your setup
Test your Varnish cache installation and verify that caching is working correctly with these commands.
# Check service status
sudo systemctl status varnish nginx
# Verify Varnish is listening on the correct ports
sudo netstat -tlnp | grep varnish
# Test cache hit/miss headers
curl -I http://example.com/
curl -I http://example.com/ # Second request should show X-Cache-Status: HIT
# Check Varnish statistics
varnishstat -1
# View cache hit ratio
varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss
# Monitor live requests
varnishlog -q 'ReqURL ~ "/"'
# Test cache purge
/usr/local/bin/varnish-purge /
# Verify backend health
varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret backend.list
Performance optimization
Configure memory and threading
Optimize Varnish memory allocation and threading based on your server specifications and traffic patterns.
# For servers with 8GB+ RAM, allocate 4GB to Varnish
sudo systemctl edit varnish
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd \
-a :80 \
-a :6081,HTTP \
-f /etc/varnish/default.vcl \
-s malloc,4G \
-T 127.0.0.1:6082 \
-S /etc/varnish/secret \
-p thread_pool_min=100 \
-p thread_pool_max=2000 \
-p thread_pools=6 \
-p timeout_linger=0.100 \
-p lru_interval=600 \
-p feature=+esi_ignore_https
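Rather than hard-coding 2G or 4G, the malloc size can be derived from installed RAM. This helper is a sketch built on an assumed heuristic (about half of total RAM for Varnish, with a 256 MB floor); tune the ratio for your workload, and remember varnishd's real footprint runs above the malloc figure due to per-object overhead:

```shell
# Suggest a varnishd malloc size from /proc/meminfo
# (50% ratio and 256 MB floor are assumptions, not Varnish defaults)
suggest_malloc() {
    local total_kb malloc_mb
    total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    malloc_mb=$(( total_kb / 1024 / 2 ))
    [ "$malloc_mb" -lt 256 ] && malloc_mb=256
    echo "-s malloc,${malloc_mb}m"
}
suggest_malloc
```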
Add advanced caching rules
Implement sophisticated caching rules for different content types and user scenarios to maximize cache efficiency.
# Merge these rules into the existing vcl_recv, before its final
# return (hash); repeated subroutine definitions are concatenated,
# so code appended after an earlier return never runs
sub vcl_recv {
# Existing rules...
# Don't cache POST/PUT/DELETE requests
if (req.method ~ "^(POST|PUT|DELETE|PATCH)$") {
return (pass);
}
# Normalize User-Agent into a few buckets; this only helps if the
# backend sends "Vary: User-Agent", otherwise it has no effect
if (req.http.User-Agent ~ "(bot|crawler|spider)") {
set req.http.User-Agent = "crawler";
} elsif (req.http.User-Agent ~ "Mobile") {
set req.http.User-Agent = "mobile";
} else {
set req.http.User-Agent = "desktop";
}
# Strip tracking parameters without discarding the rest of the query string
if (req.url ~ "(\?|&)(utm_[a-z]+|fbclid|gclid)=") {
set req.url = regsuball(req.url, "(utm_[a-z]+|fbclid|gclid)=[^&]*&?", "");
set req.url = regsub(req.url, "(\?|&)$", "");
}
return (hash);
}
# Likewise merge into the existing vcl_backend_response, before its return (deliver)
sub vcl_backend_response {
# Existing rules...
# Enable grace mode: serve stale objects while a fresh copy is fetched
set beresp.grace = 1h;
set beresp.keep = 24h;
# Cache API responses for a shorter time, with a short grace window
# (set after the default so it is not overwritten)
if (bereq.url ~ "^/api/" && beresp.status == 200) {
set beresp.ttl = 60s;
set beresp.grace = 30s;
}
return (deliver);
}
Set up a monitoring script
Create a simple script to track cache performance and system health. Save it as /usr/local/bin/varnish-stats.
#!/bin/bash
# Varnish performance monitoring script
stat() {
varnishstat -1 -f "$1" | awk '{print $2}'
}
echo "=== Varnish Cache Statistics ==="
echo "Uptime (s): $(stat MAIN.uptime)"
echo "Cache Hits: $(stat MAIN.cache_hit)"
echo "Cache Misses: $(stat MAIN.cache_miss)"
echo "Hit-for-Pass Objects: $(stat MAIN.cache_hitpass)"
echo "Objects in Cache: $(stat MAIN.n_object)"
echo "Memory Usage (bytes): $(stat SMA.s0.g_bytes)"
echo "Backend Connections: $(stat MAIN.backend_conn)"
echo "Client Requests: $(stat MAIN.client_req)"
echo
# Calculate hit ratio
HITS=$(stat MAIN.cache_hit)
MISSES=$(stat MAIN.cache_miss)
TOTAL=$((HITS + MISSES))
if [ "$TOTAL" -gt 0 ]; then
HIT_RATIO=$(echo "scale=2; $HITS * 100 / $TOTAL" | bc -l)
echo "Cache Hit Ratio: ${HIT_RATIO}%"
fi
Make monitoring script executable
Set permissions and test the monitoring script to ensure it works correctly.
sudo chmod 755 /usr/local/bin/varnish-stats
sudo apt install -y bc # Required for calculations
varnish-stats
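To keep a history, the script can run from cron (the path and interval here are examples, not requirements):

```shell
# Snapshot statistics every 5 minutes into a log file
echo '*/5 * * * * root /usr/local/bin/varnish-stats >> /var/log/varnish-stats.log 2>&1' \
    | sudo tee /etc/cron.d/varnish-stats
sudo chmod 644 /etc/cron.d/varnish-stats
```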
Common issues
| Symptom | Cause | Fix |
|---|---|---|
| 503 Backend fetch failed | NGINX backend not responding | Check NGINX status: sudo systemctl status nginx |
| Low cache hit ratio | Too many uncacheable requests | Review VCL rules and add more caching policies |
| High memory usage | Cache size too large for available RAM | Reduce malloc size in systemd service file |
| SSL not working | Missing SSL certificates or wrong proxy config | Check cert paths and NGINX SSL frontend config |
| Cache not purging | Purge requests blocked or wrong URL | Verify purge IP allowlist in VCL and test with curl |
| High CPU usage | Too many threads or insufficient resources | Adjust thread pool settings in systemd service |
| Connection refused on port 80 | Varnish not listening or firewall blocking | Check sudo netstat -tlnp for :80 and review firewall rules |
| Backend health check failing | Probe URL not accessible | Update probe URL in VCL backend definition |
Security considerations
Follow these security best practices to protect your Varnish cache deployment. Secure the management interface by restricting access to localhost only and using strong authentication. Configure proper headers to prevent cache poisoning attacks and ensure sensitive content is never cached.
Consider implementing these additional security measures. Use nftables firewall rules for advanced traffic filtering and protection against DDoS attacks. Monitor your cache performance and security with centralized logging systems and set up alerts for unusual cache behavior or security events.
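As one concrete example of the nftables idea, the rules below rate-limit new connections to the public ports. This is a sketch, not a tuned DDoS policy; the table and chain names and the 200/second threshold are assumptions to adjust:

```shell
# Create a basic inet filter table and input chain (no-ops if they already exist)
sudo nft add table inet filter
sudo nft 'add chain inet filter input { type filter hook input priority 0; policy accept; }'
# Accept up to 200 new connections/second to 80 and 443, drop the excess
sudo nft add rule inet filter input tcp dport '{ 80, 443 }' ct state new limit rate 200/second accept
sudo nft add rule inet filter input tcp dport '{ 80, 443 }' ct state new drop
```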
Next steps
- Optimize NGINX performance with microcaching and worker tuning for high-traffic websites
- Install and configure Grafana with Prometheus for system monitoring
- Configure Varnish ESI (Edge Side Includes) for dynamic content optimization
- Set up Varnish cluster with load balancing across multiple backends
- Implement Varnish cache warming with automated content preloading
Automated install script
Run the following script to automate the entire setup, or follow the manual steps above.
#!/usr/bin/env bash
set -euo pipefail
# Varnish Cache 7 + NGINX Backend Installation Script
# Production-ready installation with auto-distro detection
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Default values
DOMAIN="${1:-example.com}"
NGINX_PORT="8080"
VARNISH_PORT="80"
# Usage function
usage() {
echo "Usage: $0 [DOMAIN]"
echo "Example: $0 example.com"
exit 1
}
# Logging functions
log_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Cleanup function for rollback
cleanup() {
log_error "Installation failed. Performing cleanup..."
systemctl stop varnish 2>/dev/null || true
systemctl stop nginx 2>/dev/null || true
systemctl disable varnish 2>/dev/null || true
systemctl disable nginx 2>/dev/null || true
exit 1
}
# Set trap for cleanup on error
trap cleanup ERR
# Check prerequisites
check_prerequisites() {
if [[ $EUID -ne 0 ]]; then
log_error "This script must be run as root or with sudo"
exit 1
fi
if ! command -v curl &> /dev/null; then
log_error "curl is required but not installed"
exit 1
fi
}
# Detect distribution
detect_distro() {
if [ ! -f /etc/os-release ]; then
log_error "/etc/os-release not found. Cannot detect distribution."
exit 1
fi
. /etc/os-release
case "$ID" in
ubuntu|debian)
PKG_MGR="apt"
PKG_INSTALL="apt install -y"
PKG_UPDATE="apt update && apt upgrade -y"
NGINX_SITES_DIR="/etc/nginx/sites-available"
NGINX_ENABLED_DIR="/etc/nginx/sites-enabled"
NGINX_CONFIG_TYPE="sites"
;;
almalinux|rocky|centos|rhel|ol)
PKG_MGR="dnf"
PKG_INSTALL="dnf install -y"
PKG_UPDATE="dnf update -y"
NGINX_SITES_DIR="/etc/nginx/conf.d"
NGINX_ENABLED_DIR="/etc/nginx/conf.d"
NGINX_CONFIG_TYPE="conf.d"
;;
fedora)
PKG_MGR="dnf"
PKG_INSTALL="dnf install -y"
PKG_UPDATE="dnf update -y"
NGINX_SITES_DIR="/etc/nginx/conf.d"
NGINX_ENABLED_DIR="/etc/nginx/conf.d"
NGINX_CONFIG_TYPE="conf.d"
;;
amzn)
PKG_MGR="yum"
PKG_INSTALL="yum install -y"
PKG_UPDATE="yum update -y"
NGINX_SITES_DIR="/etc/nginx/conf.d"
NGINX_ENABLED_DIR="/etc/nginx/conf.d"
NGINX_CONFIG_TYPE="conf.d"
;;
*)
log_error "Unsupported distribution: $ID"
exit 1
;;
esac
log_success "Detected distribution: $PRETTY_NAME"
}
# Update system packages
update_system() {
echo "[1/8] Updating system packages..."
$PKG_UPDATE
log_success "System packages updated"
}
# Install Varnish Cache 7
install_varnish() {
echo "[2/8] Installing Varnish Cache 7..."
if [[ "$PKG_MGR" == "apt" ]]; then
curl -fsSL https://packagecloud.io/varnishcache/varnish70/gpgkey | gpg --dearmor -o /usr/share/keyrings/varnish-archive-keyring.gpg
# ID and VERSION_CODENAME come from /etc/os-release, sourced in detect_distro
echo "deb [signed-by=/usr/share/keyrings/varnish-archive-keyring.gpg] https://packagecloud.io/varnishcache/varnish70/${ID}/ ${VERSION_CODENAME} main" > /etc/apt/sources.list.d/varnish.list
apt update
$PKG_INSTALL varnish
else
# EPEL is required on RHEL-family distributions but not on Fedora
if [[ "$ID" != "fedora" ]]; then
$PKG_INSTALL epel-release
fi
curl -s https://packagecloud.io/install/repositories/varnishcache/varnish70/script.rpm.sh | bash
$PKG_INSTALL varnish
fi
log_success "Varnish Cache 7 installed"
}
# Install NGINX
install_nginx() {
echo "[3/8] Installing NGINX..."
$PKG_INSTALL nginx
log_success "NGINX installed"
}
# Configure NGINX backend
configure_nginx() {
echo "[4/8] Configuring NGINX backend on port $NGINX_PORT..."
if [[ "$NGINX_CONFIG_TYPE" == "sites" ]]; then
CONFIG_FILE="$NGINX_SITES_DIR/backend"
else
CONFIG_FILE="$NGINX_SITES_DIR/backend.conf"
fi
cat > "$CONFIG_FILE" << EOF
server {
listen $NGINX_PORT;
server_name $DOMAIN www.$DOMAIN;
root /var/www/html;
index index.html index.php;
# Add cache control headers for Varnish
location ~* \.(css|js|png|jpg|jpeg|gif|ico|svg)$ {
expires 1y;
add_header Cache-Control "public, immutable";
add_header X-Backend-Server "nginx";
}
location / {
try_files \$uri \$uri/ =404;
add_header X-Backend-Server "nginx";
}
# PHP processing (if needed)
location ~ \.php\$ {
include fastcgi_params;
fastcgi_pass unix:/var/run/php/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME \$document_root\$fastcgi_script_name;
add_header X-Backend-Server "nginx";
}
}
EOF
chmod 644 "$CONFIG_FILE"
chown root:root "$CONFIG_FILE"
# Enable configuration based on distro
if [[ "$NGINX_CONFIG_TYPE" == "sites" ]]; then
ln -sf "$CONFIG_FILE" "$NGINX_ENABLED_DIR/"
rm -f "$NGINX_ENABLED_DIR/default"
fi
# Test NGINX configuration
nginx -t
systemctl enable nginx
systemctl restart nginx
log_success "NGINX backend configured on port $NGINX_PORT"
}
# Configure Varnish VCL
configure_varnish() {
echo "[5/8] Configuring Varnish VCL..."
cat > /etc/varnish/default.vcl << 'EOF'
vcl 4.1;
import directors;
backend nginx_backend {
.host = "127.0.0.1";
.port = "8080";
.connect_timeout = 5s;
.first_byte_timeout = 30s;
.between_bytes_timeout = 5s;
.max_connections = 100;
.probe = {
.url = "/";
.timeout = 3s;
.interval = 5s;
.window = 5;
.threshold = 3;
};
}
sub vcl_init {
new backend_director = directors.round_robin();
backend_director.add_backend(nginx_backend);
}
acl purge_allowed {
"127.0.0.1";
"::1";
}
sub vcl_recv {
set req.backend_hint = backend_director.backend();
# Remove port from host header
set req.http.Host = regsub(req.http.Host, ":[0-9]+$", "");
# Handle purge requests (allowed from localhost only)
if (req.method == "PURGE") {
if (client.ip !~ purge_allowed) {
return (synth(403, "Not allowed"));
}
return (purge);
}
# Only cache GET and HEAD requests
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Remove cookies for static assets
if (req.url ~ "\.(css|js|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$") {
unset req.http.Cookie;
}
return (hash);
}
sub vcl_backend_response {
# Cache static assets for 1 hour
if (bereq.url ~ "\.(css|js|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$") {
set beresp.ttl = 1h;
set beresp.http.Cache-Control = "public, max-age=3600";
}
# Cache HTML for 5 minutes
if (beresp.http.Content-Type ~ "text/html") {
set beresp.ttl = 5m;
}
return (deliver);
}
sub vcl_deliver {
# Add cache status header
if (obj.hits > 0) {
set resp.http.X-Cache = "HIT";
set resp.http.X-Cache-Hits = obj.hits;
} else {
set resp.http.X-Cache = "MISS";
}
# Remove backend server info in production
unset resp.http.X-Varnish;
unset resp.http.Server;
unset resp.http.X-Powered-By;
return (deliver);
}
EOF
chmod 644 /etc/varnish/default.vcl
chown root:root /etc/varnish/default.vcl
log_success "Varnish VCL configured"
}
# Configure Varnish daemon
configure_varnish_daemon() {
echo "[6/8] Configuring Varnish daemon..."
# Create systemd override directory
mkdir -p /etc/systemd/system/varnish.service.d/
cat > /etc/systemd/system/varnish.service.d/override.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -a :$VARNISH_PORT -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
EOF
chmod 644 /etc/systemd/system/varnish.service.d/override.conf
chown root:root /etc/systemd/system/varnish.service.d/override.conf
systemctl daemon-reload
systemctl enable varnish
systemctl start varnish
log_success "Varnish daemon configured and started"
}
# Configure firewall
configure_firewall() {
echo "[7/8] Configuring firewall..."
if command -v ufw &> /dev/null; then
ufw allow $VARNISH_PORT/tcp
ufw allow $NGINX_PORT/tcp
log_success "UFW firewall rules added"
elif command -v firewall-cmd &> /dev/null; then
firewall-cmd --permanent --add-port=$VARNISH_PORT/tcp
firewall-cmd --permanent --add-port=$NGINX_PORT/tcp
firewall-cmd --reload
log_success "Firewalld rules added"
else
log_warning "No supported firewall found. Please configure manually."
fi
}
# Verify installation
verify_installation() {
echo "[8/8] Verifying installation..."
# Check services are running
if systemctl is-active --quiet nginx; then
log_success "NGINX is running"
else
log_error "NGINX is not running"
return 1
fi
if systemctl is-active --quiet varnish; then
log_success "Varnish is running"
else
log_error "Varnish is not running"
return 1
fi
# Test backend connectivity
if curl -s -f "http://127.0.0.1:$NGINX_PORT" > /dev/null; then
log_success "NGINX backend is responding"
else
log_error "NGINX backend is not responding"
return 1
fi
# Test Varnish connectivity
if curl -s -f "http://127.0.0.1:$VARNISH_PORT" > /dev/null; then
log_success "Varnish is responding"
else
log_error "Varnish is not responding"
return 1
fi
log_success "Installation completed successfully!"
echo ""
log_info "Varnish Cache is running on port $VARNISH_PORT"
log_info "NGINX backend is running on port $NGINX_PORT"
log_info "Test with: curl -I http://localhost"
log_info "Varnish admin: varnishadm -T localhost:6082"
}
# Main execution
main() {
if [[ "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
usage
fi
log_info "Starting Varnish Cache 7 + NGINX installation for domain: $DOMAIN"
check_prerequisites
detect_distro
update_system
install_varnish
install_nginx
configure_nginx
configure_varnish
configure_varnish_daemon
configure_firewall
verify_installation
}
main "$@"
Review the script before running. It requires root privileges. Execute with: sudo bash install.sh example.com