Configure Deno application clustering with load balancing for high availability

Intermediate · 45 min · Apr 12, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Set up Deno application clustering with worker threads and HAProxy load balancing for production-grade high availability deployments. Configure health checks, monitoring, and automatic failover mechanisms.

Prerequisites

  • Root or sudo access
  • 4GB+ RAM recommended
  • Basic understanding of HTTP and load balancing
  • Familiarity with systemd services

What this solves

Deno applications running as single processes can become bottlenecks under high load and create single points of failure in production environments. This tutorial shows you how to configure Deno clustering with worker threads and HAProxy load balancing to distribute traffic across multiple application instances, ensuring high availability and improved performance.

Step-by-step installation

Update system packages

Start by updating your package manager to ensure you have the latest security updates and package versions.

# Ubuntu / Debian
sudo apt update && sudo apt upgrade -y

# AlmaLinux / Rocky Linux
sudo dnf update -y

Install Deno runtime

Install the latest Deno runtime using the official installer. This provides the runtime environment for your clustered applications.

curl -fsSL https://deno.land/install.sh | sh
echo 'export DENO_INSTALL="$HOME/.deno"' >> ~/.bashrc
echo 'export PATH="$DENO_INSTALL/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
deno --version

# Copy the binary to a system-wide path so the systemd services
# (which run as the dedicated deno user) can execute it
sudo cp "$HOME/.deno/bin/deno" /usr/local/bin/deno

Install HAProxy load balancer

Install HAProxy to distribute incoming requests across multiple Deno worker processes with health checking capabilities.

# Ubuntu / Debian
sudo apt install -y haproxy

# AlmaLinux / Rocky Linux
sudo dnf install -y haproxy

Create Deno application directory

Set up the directory structure for your clustered Deno application with proper ownership and permissions.

sudo mkdir -p /opt/deno-cluster
sudo useradd --system --no-create-home --shell /bin/false deno
sudo chown -R deno:deno /opt/deno-cluster
sudo chmod 755 /opt/deno-cluster

Create main application server

Create the main Deno HTTP server that will handle incoming requests and save it as /opt/deno-cluster/server.ts. This server includes basic health check endpoints and request logging.

const PORT = parseInt(Deno.env.get("PORT") || "8000");
const WORKER_ID = Deno.env.get("WORKER_ID") || "main";

const handler = async (request: Request): Promise<Response> => {
  const url = new URL(request.url);
  const startTime = Date.now();

  // Health check endpoint for HAProxy
  if (url.pathname === "/health") {
    return new Response(JSON.stringify({
      status: "healthy",
      worker: WORKER_ID,
      timestamp: new Date().toISOString(),
      uptime: performance.now()
    }), {
      headers: { "content-type": "application/json" },
      status: 200
    });
  }

  // Status endpoint with worker information
  if (url.pathname === "/status") {
    return new Response(JSON.stringify({
      worker: WORKER_ID,
      port: PORT,
      memory: Deno.memoryUsage(),
      timestamp: new Date().toISOString()
    }), {
      headers: { "content-type": "application/json" }
    });
  }

  // Main application logic
  const responseTime = Date.now() - startTime;
  const response = `Hello from Deno Worker ${WORKER_ID} on port ${PORT}\nRequest processed in ${responseTime}ms\nPath: ${url.pathname}`;

  console.log(`[${WORKER_ID}] ${request.method} ${url.pathname} - ${responseTime}ms`);

  return new Response(response, {
    headers: { "content-type": "text/plain" }
  });
};

// Deno.serve (stable since Deno 1.35) returns a server handle with shutdown()
const server = Deno.serve({ port: PORT }, handler);
console.log(`Worker ${WORKER_ID} listening on port ${PORT}`);

// Graceful shutdown handling
Deno.addSignalListener("SIGTERM", () => {
  console.log(`Worker ${WORKER_ID} shutting down...`);
  server.shutdown();
});
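HAProxy's health check (configured below with http-check expect status 200) only inspects the status code, not the body. A consumer that does read the body can model the /health payload explicitly. A minimal sketch; the HealthPayload interface and parseHealth helper are illustrative, not part of the tutorial files:

```typescript
// Shape of the /health response produced by the handler above
interface HealthPayload {
  status: "healthy";
  worker: string;
  timestamp: string;
  uptime: number;
}

// Parse a /health body and reject anything that is not reporting healthy
function parseHealth(body: string): HealthPayload {
  const data = JSON.parse(body) as HealthPayload;
  if (data.status !== "healthy") {
    throw new Error(`unexpected status: ${data.status}`);
  }
  return data;
}

const sample =
  '{"status":"healthy","worker":"worker-1","timestamp":"2026-04-12T00:00:00.000Z","uptime":123.4}';
console.log(parseHealth(sample).worker); // "worker-1"
```

The same shape is what the monitoring script later in this tutorial reads via data.worker.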

Create cluster manager script

Create a cluster manager, saved as /opt/deno-cluster/cluster.ts, that spawns multiple Deno worker threads using the Web Workers API for true parallelism.

interface WorkerConfig {
  id: string;
  port: number;
  worker?: Worker;
}

const WORKER_COUNT = parseInt(Deno.env.get("WORKER_COUNT") || "4");
const BASE_PORT = 8001;
const workers: WorkerConfig[] = [];

// Initialize worker configurations
for (let i = 0; i < WORKER_COUNT; i++) {
  workers.push({
    id: `worker-${i + 1}`,
    port: BASE_PORT + i
  });
}

// Worker script executed by each worker thread
const workerScript = `
self.onmessage = async (e) => {
  const { workerId, port, serverUrl } = e.data;

  // Deno.env is shared across the whole process, so set these
  // immediately before importing the server module
  Deno.env.set("WORKER_ID", workerId);
  Deno.env.set("PORT", port.toString());

  try {
    // Import and run the server. An absolute URL is required because
    // this worker itself runs from a blob: URL
    await import(serverUrl);
  } catch (error) {
    console.error(\`Worker \${workerId} failed to start:\`, error);
    self.postMessage({ type: "error", workerId, error: error.message });
  }
};
`;

// Function to start a worker
const startWorker = (config: WorkerConfig): Promise<void> => {
  return new Promise((resolve, reject) => {
    try {
      const worker = new Worker(
        URL.createObjectURL(new Blob([workerScript], { type: "application/javascript" })),
        { type: "module", deno: { permissions: { net: true, env: true, read: true } } }
      );

      config.worker = worker;

      worker.onmessage = (e) => {
        if (e.data.type === "error") {
          console.error(`Worker ${config.id} error:`, e.data.error);
          // Restart worker after error
          setTimeout(() => restartWorker(config), 5000);
        }
      };

      worker.onerror = (error) => {
        console.error(`Worker ${config.id} crashed:`, error);
        setTimeout(() => restartWorker(config), 5000);
      };

      // Send configuration to the worker, including an absolute URL for server.ts
      worker.postMessage({
        workerId: config.id,
        port: config.port,
        serverUrl: new URL("./server.ts", import.meta.url).href
      });

      console.log(`Started worker ${config.id} on port ${config.port}`);
      resolve();
    } catch (error) {
      reject(error);
    }
  });
};

// Function to restart a worker
const restartWorker = async (config: WorkerConfig) => {
  console.log(`Restarting worker ${config.id}...`);

  if (config.worker) {
    config.worker.terminate();
  }

  try {
    await startWorker(config);
    console.log(`Worker ${config.id} restarted successfully`);
  } catch (error) {
    console.error(`Failed to restart worker ${config.id}:`, error);
  }
};

// Start all workers
console.log(`Starting ${WORKER_COUNT} Deno workers...`);

for (const worker of workers) {
  try {
    await startWorker(worker);
    // Small delay between starts so each worker reads its env vars
    // before the next one overwrites them
    await new Promise(resolve => setTimeout(resolve, 100));
  } catch (error) {
    console.error(`Failed to start worker ${worker.id}:`, error);
  }
}

console.log("All workers started. Cluster is ready.");
console.log(`Workers listening on ports ${BASE_PORT}-${BASE_PORT + WORKER_COUNT - 1}`);

// Graceful shutdown
Deno.addSignalListener("SIGTERM", () => {
  console.log("Shutting down cluster...");

  for (const worker of workers) {
    if (worker.worker) {
      worker.worker.terminate();
    }
  }

  Deno.exit(0);
});

// Keep the main process alive
setInterval(() => {
  // Health checks for workers could be implemented here
}, 30000);
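The restart logic above always waits a fixed 5 seconds. If a worker crash-loops, exponential backoff is a common refinement so restarts slow down instead of hammering a broken port. A sketch of the delay calculation only; backoffDelay is a hypothetical helper, not wired into cluster.ts:

```typescript
// Exponential backoff with a ceiling: 5s, 10s, 20s, ... capped at maxMs.
// `attempt` counts consecutive failed restarts; reset it to 0 once a
// worker stays healthy for a while.
function backoffDelay(attempt: number, baseMs = 5000, maxMs = 60000): number {
  return Math.min(maxMs, baseMs * 2 ** attempt);
}

console.log(backoffDelay(0)); // 5000
console.log(backoffDelay(3)); // 40000
console.log(backoffDelay(10)); // 60000 (capped)
```

To use it, a restart counter would be tracked per WorkerConfig and passed in place of the constant 5000 in the setTimeout calls.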

Configure HAProxy load balancer

Edit /etc/haproxy/haproxy.cfg so HAProxy distributes traffic across the Deno worker processes with health checks and session persistence.

global
    daemon
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    log 127.0.0.1 local0 info
    
    # SSL/TLS configuration
    tune.ssl.default-dh-param 2048
    ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11

defaults
    mode http
    log global
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

# Statistics interface
frontend stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 5s
    stats admin if TRUE

# Main frontend for Deno applications
frontend deno_frontend
    bind *:80
    bind *:443 ssl crt /etc/ssl/certs/example.com.pem

    # Security headers
    http-response set-header X-Frame-Options DENY
    http-response set-header X-Content-Type-Options nosniff
    http-response set-header X-XSS-Protection "1; mode=block"
    http-response set-header Referrer-Policy strict-origin-when-cross-origin

    # Redirect HTTP to HTTPS
    redirect scheme https if !{ ssl_fc }

    default_backend deno_cluster

# Backend configuration for Deno workers
backend deno_cluster
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200

    # Worker servers
    server worker1 127.0.0.1:8001 check inter 5s fall 3 rise 2
    server worker2 127.0.0.1:8002 check inter 5s fall 3 rise 2
    server worker3 127.0.0.1:8003 check inter 5s fall 3 rise 2
    server worker4 127.0.0.1:8004 check inter 5s fall 3 rise 2

    # Backup server configuration
    # server backup1 127.0.0.1:8005 check backup

    # Connection limits and timeouts
    timeout check 3s
    option log-health-checks

Create systemd service for Deno cluster

Create /etc/systemd/system/deno-cluster.service to manage the Deno cluster with proper process isolation and resource limits.

[Unit]
Description=Deno Application Cluster
After=network.target
Wants=network-online.target

[Service]
Type=simple
User=deno
Group=deno
WorkingDirectory=/opt/deno-cluster
ExecStart=/usr/local/bin/deno run --allow-net --allow-env --allow-read cluster.ts
Restart=always
RestartSec=10
KillMode=mixed
KillSignal=SIGTERM
TimeoutStopSec=30

# Environment variables
Environment=WORKER_COUNT=4
Environment=NODE_ENV=production

# Security settings
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/deno-cluster

# Resource limits
LimitNOFILE=65536
LimitNPROC=4096

# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=deno-cluster

[Install]
WantedBy=multi-user.target

Configure HAProxy logging

Enable detailed logging for HAProxy to monitor load balancer performance and troubleshoot issues. Create /etc/rsyslog.d/49-haproxy.conf with the following content.

# HAProxy log configuration
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514

# HAProxy logs
local0.* /var/log/haproxy.log
& stop

Set proper file permissions

Configure correct ownership and permissions for all configuration files and directories.

Never use chmod 777. It gives every user on the system full access to your files. Instead, fix ownership with chown and use minimal permissions.
sudo chown -R deno:deno /opt/deno-cluster
sudo chmod 755 /opt/deno-cluster
sudo chmod 644 /opt/deno-cluster/*.ts
sudo chmod 644 /etc/systemd/system/deno-cluster.service
sudo chmod 644 /etc/haproxy/haproxy.cfg
sudo chmod 644 /etc/rsyslog.d/49-haproxy.conf

Enable and start services

Start the Deno cluster and HAProxy services, enabling them to start automatically on system boot.

sudo systemctl daemon-reload
sudo systemctl enable --now deno-cluster
sudo systemctl restart rsyslog
sudo systemctl enable --now haproxy
sudo systemctl status deno-cluster
sudo systemctl status haproxy

Configure firewall rules

Open the necessary ports for HTTP/HTTPS traffic and HAProxy statistics while maintaining security.

# Ubuntu / Debian (ufw)
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 8404/tcp comment "HAProxy Stats"
sudo ufw reload

# AlmaLinux / Rocky Linux (firewalld)
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-port=8404/tcp
sudo firewall-cmd --reload

Configure health monitoring

Create monitoring script

Set up a monitoring script, saved as /opt/deno-cluster/monitor.ts, that checks cluster health and can restart failed workers automatically.

#!/usr/bin/env -S deno run --allow-net --allow-run

interface HealthStatus {
  worker: string;
  healthy: boolean;
  responseTime: number;
  error?: string;
}

const WORKER_PORTS = [8001, 8002, 8003, 8004];
const HEALTH_CHECK_INTERVAL = 30000; // 30 seconds
const MAX_RESPONSE_TIME = 5000; // 5 seconds

async function checkWorkerHealth(port: number): Promise<HealthStatus> {
  const startTime = Date.now();

  try {
    const response = await fetch(`http://localhost:${port}/health`, {
      signal: AbortSignal.timeout(MAX_RESPONSE_TIME)
    });

    const responseTime = Date.now() - startTime;

    if (response.ok) {
      const data = await response.json();
      return {
        worker: data.worker || `port-${port}`,
        healthy: true,
        responseTime
      };
    } else {
      return {
        worker: `port-${port}`,
        healthy: false,
        responseTime,
        error: `HTTP ${response.status}`
      };
    }
  } catch (error) {
    return {
      worker: `port-${port}`,
      healthy: false,
      responseTime: Date.now() - startTime,
      error: error instanceof Error ? error.message : String(error)
    };
  }
}

async function checkClusterHealth(): Promise<HealthStatus[]> {
  const checks = WORKER_PORTS.map(port => checkWorkerHealth(port));
  return await Promise.all(checks);
}

async function logHealthStatus() {
  const healthStatuses = await checkClusterHealth();
  const timestamp = new Date().toISOString();

  console.log(`\n[${timestamp}] Cluster Health Check`);
  console.log("═".repeat(50));

  let healthyCount = 0;

  for (const status of healthStatuses) {
    const statusIcon = status.healthy ? "✓" : "✗";
    const healthText = status.healthy ? "HEALTHY" : "UNHEALTHY";

    console.log(`${statusIcon} ${status.worker}: ${healthText} (${status.responseTime}ms)`);

    if (status.error) {
      console.log(`  Error: ${status.error}`);
    }

    if (status.healthy) {
      healthyCount++;
    }
  }

  console.log("═".repeat(50));
  console.log(`Healthy workers: ${healthyCount}/${WORKER_PORTS.length}`);

  if (healthyCount === 0) {
    console.error("⚠️  ALL WORKERS DOWN! Attempting to restart cluster...");
    await restartCluster();
  } else if (healthyCount < WORKER_PORTS.length) {
    console.warn(`⚠️  ${WORKER_PORTS.length - healthyCount} workers are down`);
  }
}

async function restartCluster() {
  try {
    console.log("Restarting Deno cluster...");
    const command = new Deno.Command("sudo", {
      args: ["systemctl", "restart", "deno-cluster"]
    });

    const { success } = await command.output();

    if (success) {
      console.log("✓ Cluster restart initiated");
    } else {
      console.error("✗ Failed to restart cluster");
    }
  } catch (error) {
    console.error("Error restarting cluster:", error);
  }
}

// Main monitoring loop
console.log("Starting Deno cluster health monitor...");
console.log(`Monitoring ${WORKER_PORTS.length} workers on ports: ${WORKER_PORTS.join(", ")}`);
console.log(`Check interval: ${HEALTH_CHECK_INTERVAL / 1000} seconds\n`);

// Initial health check
await logHealthStatus();

// Set up periodic health checks
setInterval(logHealthStatus, HEALTH_CHECK_INTERVAL);

// Graceful shutdown
Deno.addSignalListener("SIGTERM", () => {
  console.log("\nHealth monitor shutting down...");
  Deno.exit(0);
});

Create monitoring service

Create /etc/systemd/system/deno-monitor.service so the health monitoring script runs continuously.

[Unit]
Description=Deno Cluster Health Monitor
After=deno-cluster.service
Requires=deno-cluster.service

[Service]
Type=simple
User=deno
Group=deno
WorkingDirectory=/opt/deno-cluster
ExecStart=/usr/local/bin/deno run --allow-net --allow-run monitor.ts
Restart=always
RestartSec=15

# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=deno-monitor

[Install]
WantedBy=multi-user.target

Enable monitoring service

Start the monitoring service to begin continuous health checking of your Deno cluster.

sudo chmod +x /opt/deno-cluster/monitor.ts
sudo systemctl daemon-reload
sudo systemctl enable --now deno-monitor
sudo systemctl status deno-monitor

Verify your setup

Test the clustered Deno application and load balancer configuration to ensure everything is working correctly.

# Check all services are running
sudo systemctl status deno-cluster haproxy deno-monitor

Test individual workers

curl -s http://localhost:8001/health | jq
curl -s http://localhost:8002/health | jq
curl -s http://localhost:8003/health | jq
curl -s http://localhost:8004/health | jq

Test load balancer

curl -s http://localhost/status
curl -s http://localhost/health

Check HAProxy stats

curl -s http://localhost:8404/stats

View service logs

journalctl -u deno-cluster -f --lines=20
journalctl -u deno-monitor -f --lines=20
sudo tail -f /var/log/haproxy.log

Test load distribution

for i in {1..10}; do curl -s http://localhost/ | grep Worker; done
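To turn that loop's output into hard numbers, the response bodies can be tallied per worker. A small sketch assuming the response format produced by server.ts above; the tallyWorkers helper is illustrative, not one of the tutorial files:

```typescript
// Count which worker served each response, matching bodies like
// "Hello from Deno Worker worker-1 on port 8001"
function tallyWorkers(responses: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const body of responses) {
    const match = body.match(/Worker (\S+) on port/);
    const worker = match ? match[1] : "unknown";
    counts.set(worker, (counts.get(worker) ?? 0) + 1);
  }
  return counts;
}

// With round-robin over the workers, requests should split evenly:
const bodies = [
  "Hello from Deno Worker worker-1 on port 8001",
  "Hello from Deno Worker worker-2 on port 8002",
  "Hello from Deno Worker worker-1 on port 8001",
  "Hello from Deno Worker worker-2 on port 8002",
];
console.log(tallyWorkers(bodies)); // worker-1 and worker-2 each counted twice
```

A heavily skewed tally would suggest a worker is failing health checks and being pulled out of rotation by HAProxy.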

Performance optimization

Configure worker count optimization

Adjust the number of worker processes based on your server's CPU cores and expected load.

# Check CPU cores
nproc

# Update worker count (recommended: CPU cores * 1.5)
sudo systemctl edit deno-cluster
# Add the following to the override file:
[Service]
Environment=WORKER_COUNT=6
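The sizing rule can be expressed as a small helper; the core count would come from nproc (or navigator.hardwareConcurrency inside Deno). recommendedWorkerCount is a hypothetical illustration, not part of cluster.ts:

```typescript
// Recommended worker count: CPU cores * 1.5, rounded, with a floor so
// even a single-core machine still runs a minimal cluster
function recommendedWorkerCount(cores: number, factor = 1.5, min = 2): number {
  return Math.max(min, Math.round(cores * factor));
}

console.log(recommendedWorkerCount(4)); // 6, matching the override above
console.log(recommendedWorkerCount(1)); // 2 (floor applies)
```

If you adopt a different factor, update the HAProxy backend so there is one server line per worker port.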

Configure connection limits

Optimize HAProxy connection limits and timeouts for your specific use case and expected traffic patterns.

# Add to defaults section
maxconn 4096
option http-keep-alive
timeout http-keep-alive 10s
timeout http-request 10s

# Add to backend section
maxconn 1024
option prefer-last-server

Common issues

Symptom, cause, and fix:

  • Workers won't start: permission denied or a port conflict. Fix: sudo chown -R deno:deno /opt/deno-cluster and check listening ports with netstat -tlpn
  • HAProxy shows all servers down: the health check endpoint is not responding. Fix: test individual workers with curl http://localhost:8001/health
  • 502 Bad Gateway errors: no healthy backend servers. Fix: check worker status with systemctl status deno-cluster
  • High memory usage: too many workers for the available RAM. Fix: reduce WORKER_COUNT or add memory limits to the systemd service
  • SSL certificate errors: missing or invalid certificates. Fix: generate certificates or remove the SSL configuration from HAProxy
  • Monitoring script fails: the deno user lacks sudo permission for the restart command. Fix: allow the deno user to run systemctl restart deno-cluster via sudoers
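For the last issue, a minimal sudoers rule scoped to the one restart command the monitor needs. Place it in a drop-in file such as /etc/sudoers.d/deno-cluster (a suggested path) and edit it with visudo; note the systemctl path may be /bin/systemctl on some distributions:

```
# Allow the deno user to restart only the deno-cluster unit, without a password
deno ALL=(root) NOPASSWD: /usr/bin/systemctl restart deno-cluster
```

Granting only this exact command keeps the monitor from gaining broader root access if it is ever compromised.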
