Infrastructure tutorials
Production-grade guides for Linux, servers, security, and performance: copy-paste commands, multi-distro support, written by engineers who run these systems in production.
Browse by topic
Linux
System administration, shell scripting, package management
Hosting & Servers
Web servers, reverse proxies, SSL, domains
Security
Firewalls, hardening, encryption, access control
Performance
Caching, optimization, profiling, load testing
Databases
MySQL, PostgreSQL, Redis, backups, replication
Networking
DNS, load balancing, VPN, TCP/IP, routing
DevOps
CI/CD, Docker, Kubernetes, automation
Monitoring
Logging, alerting, metrics, observability
Most viewed
Configure Linux system time synchronization with chrony and NTP hardening [linux]
Set up Node.js application security with Helmet and rate limiting [security]
Install and configure Caddy web server with automatic HTTPS and reverse proxy [hosting]
Install and configure Uvicorn ASGI server with systemd and reverse proxy for FastAPI applications [hosting]
Install and configure PostgreSQL 17 with performance tuning and security hardening [databases]
Recently published
Configure systemd service resource limits and security isolation [linux]
Configure NGINX SSL termination with Redis session storage [hosting]
Configure intrusion detection with OSSEC and fail2ban integration [security]
Set up Varnish 7 cluster with load balancing across multiple backends [performance]
Configure OSSEC active response for automated threat blocking [security]
Implement Kubernetes pod disruption budgets for high availability during scaling events
Configure Pod Disruption Budgets to ensure application availability during cluster maintenance and scaling operations. Learn to implement PDB policies, test disruption scenarios, and maintain service continuity in Kubernetes.
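As a taste of what the tutorial covers, a minimal PodDisruptionBudget might look like the sketch below. The Deployment label `app: web` and the budget of two available replicas are illustrative assumptions, not values from the tutorial.

```yaml
# Hypothetical PDB: keep at least 2 pods of the "app: web" workload
# running during voluntary disruptions (drains, rolling node upgrades).
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2          # could also use maxUnavailable instead
  selector:
    matchLabels:
      app: web             # must match the target Deployment's pod labels
```

Apply it with `kubectl apply -f web-pdb.yaml`, then test it by draining a node and watching evictions respect the budget.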
Configure Kubernetes vertical pod autoscaler for resource optimization and cost management
Set up VPA to automatically adjust CPU and memory requests for your Kubernetes workloads. Reduce resource waste and optimize costs by letting VPA analyze actual usage patterns and rightsizing containers.
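A minimal VPA manifest along those lines is sketched below, assuming the VPA CRDs and controller are already installed and the target Deployment is named `web` (both assumptions for illustration).

```yaml
# Hypothetical VPA: let the recommender rightsize CPU/memory requests
# for the "web" Deployment, within explicit floor and ceiling bounds.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  updatePolicy:
    updateMode: "Auto"     # evict and recreate pods to apply recommendations
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        minAllowed:
          cpu: 50m
          memory: 64Mi
        maxAllowed:
          cpu: "2"
          memory: 2Gi
```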
Set up Kubernetes custom metrics autoscaling with Prometheus adapter for application-specific scaling
Configure the Prometheus adapter to expose custom application metrics to the Kubernetes Horizontal Pod Autoscaler. Scale on business metrics such as queue depth, response time, and user load instead of basic CPU and memory usage.
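Once the adapter exposes a per-pod metric, the HPA side might look like this sketch. The Deployment name `worker`, the metric name `queue_depth`, and the target of 30 items per pod are assumed for illustration; the metric must actually be served by the adapter under that name.

```yaml
# Hypothetical HPA scaling on a custom per-pod metric exposed by the
# Prometheus adapter through the custom.metrics.k8s.io API.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Pods
      pods:
        metric:
          name: queue_depth        # custom metric name (assumed)
        target:
          type: AverageValue
          averageValue: "30"       # scale out above ~30 queued items per pod
```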
Configure Kubernetes horizontal pod autoscaler for dynamic scaling based on resource metrics
Set up HPA with CPU and memory targets for automatic pod scaling. Configure metrics server and Prometheus adapter for custom metrics monitoring. Enable dynamic workload scaling based on resource utilization.
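A resource-metrics HPA of that shape is sketched below. It assumes the metrics server is running and a Deployment named `web` exists; the 70%/80% utilization targets and replica bounds are placeholder values.

```yaml
# Hypothetical HPA: scale the "web" Deployment on CPU and memory
# utilization reported by the metrics server.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 15
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```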
Implement Kubernetes cluster autoscaling with Helm charts and KEDA for dynamic workload scaling
Configure comprehensive Kubernetes autoscaling with cluster autoscaler for node management, KEDA for event-driven pod scaling, and vertical pod autoscaler for resource optimization. This tutorial covers production-grade deployment using Helm charts with monitoring and optimization strategies.
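For the event-driven piece, a KEDA ScaledObject might look like the sketch below. The `worker` Deployment, the RabbitMQ trigger, the queue name, and the connection string are all illustrative assumptions; KEDA supports many other event sources.

```yaml
# Hypothetical KEDA ScaledObject: scale "worker" between 0 and 50
# replicas based on the length of a RabbitMQ queue.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker                   # Deployment to scale (assumed)
  minReplicaCount: 0               # scale to zero when the queue is idle
  maxReplicaCount: 50
  triggers:
    - type: rabbitmq               # event source is an assumption
      metadata:
        queueName: jobs
        mode: QueueLength
        value: "10"                # target ~10 messages per replica
        host: amqp://guest:guest@rabbitmq.default:5672/
```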
Need help?
Don't want to manage this yourself?
We handle infrastructure for businesses that depend on uptime. From initial setup to ongoing operations.
Talk to an engineer