Infrastructure tutorials
Production-grade guides for Linux, servers, security, and performance. Copy-paste commands, multi-distro support, written by engineers who run these systems in production.
Browse by topic
Linux
System administration, shell scripting, package management
Hosting & Servers
Web servers, reverse proxies, SSL, domains
Security
Firewalls, hardening, encryption, access control
Performance
Caching, optimization, profiling, load testing
Databases
MySQL, PostgreSQL, Redis, backups, replication
Networking
DNS, load balancing, VPN, TCP/IP, routing
DevOps
CI/CD, Docker, Kubernetes, automation
Monitoring
Logging, alerting, metrics, observability
Most viewed
Install and configure Deno for web development with systemd and reverse proxy [hosting]
Install and configure Caddy web server with automatic HTTPS and reverse proxy [hosting]
Install and configure Uvicorn ASGI server with systemd and reverse proxy for FastAPI applications [hosting]
Install and configure Ollama for local AI models on Linux servers [devops]
Configure Linux system time synchronization with chrony and NTP hardening [linux]
Recently published
Configure Django Redis caching and session storage for high-performance web applications [performance]
Set up Kafka Streams testing framework with TopologyTestDriver for automated stream processing validation [devops]
Benchmark database performance with sysbench and fio integration [databases]
Configure automated system maintenance with advanced cron scheduling and shell scripts [linux]
Configure Consul multi-datacenter WAN federation for geographic redundancy [devops]
Install and configure Traefik reverse proxy with SSL automation
Set up Traefik as a reverse proxy with Docker Compose for automatic SSL certificate management, service discovery, and load balancing across multiple backend services.
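The tutorial drives Traefik from Docker Compose; as a rough plain-shell equivalent, here is a minimal sketch assuming Docker is installed, a DNS record for app.example.com already points at the server, and admin@example.com is a placeholder Let's Encrypt contact address.

  # Start Traefik with the Docker provider and a Let's Encrypt certificate resolver
  docker network create web
  docker run -d --name traefik --network web \
    -p 80:80 -p 443:443 \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    -v /srv/traefik:/letsencrypt \
    traefik:v3 \
    --providers.docker=true \
    --providers.docker.exposedByDefault=false \
    --entrypoints.web.address=:80 \
    --entrypoints.websecure.address=:443 \
    --certificatesresolvers.le.acme.tlschallenge=true \
    --certificatesresolvers.le.acme.email=admin@example.com \
    --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json

  # Publish a backend container through Traefik using labels only
  docker run -d --name app --network web \
    -l traefik.enable=true \
    -l 'traefik.http.routers.app.rule=Host(`app.example.com`)' \
    -l traefik.http.routers.app.entrypoints=websecure \
    -l traefik.http.routers.app.tls.certresolver=le \
    docker.io/library/nginx

Traefik watches the Docker socket, so further containers with these labels are picked up without restarting the proxy; pin a specific image version in production.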
Install and configure Podman for rootless containers on Linux
Learn to install Podman and configure rootless containers as a secure Docker alternative. Includes Docker Compose migration, systemd integration, and troubleshooting common permission issues.
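A minimal sketch of the rootless workflow on a systemd-based distro; the container name web and host port 8080 are arbitrary examples.

  # Install Podman (pick the line matching your distro)
  sudo apt install -y podman      # Debian/Ubuntu
  sudo dnf install -y podman      # Fedora/RHEL

  # Run a container as a regular user: no daemon, no root
  podman run -d --name web -p 8080:80 docker.io/library/nginx
  podman info --format '{{.Host.Security.Rootless}}'   # should print: true

  # Let user services survive logout, then hand the container to systemd
  sudo loginctl enable-linger "$USER"
  mkdir -p ~/.config/systemd/user
  podman generate systemd --new --name web > ~/.config/systemd/user/web.service
  podman rm -f web
  systemctl --user daemon-reload
  systemctl --user enable --now web.service

Newer Podman releases steer you toward Quadlet units instead of podman generate systemd, but the generated unit still works and is easy to inspect before enabling.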
Install and configure Ollama for local AI models on Linux servers
Set up Ollama to run large language models locally on your Linux server. This tutorial covers installation, GPU acceleration, model deployment, API configuration, and performance optimization.
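A condensed sketch of the install-and-test loop; the one-liner is Ollama's official install script, and llama3 is just an example model tag.

  # Install Ollama and make sure the service is running
  curl -fsSL https://ollama.com/install.sh | sh
  sudo systemctl enable --now ollama

  # Pull a model and test it from the shell
  ollama pull llama3
  ollama run llama3 "Summarize what a reverse proxy does."

  # The HTTP API listens on 127.0.0.1:11434 by default
  curl http://localhost:11434/api/generate \
    -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'

If you later expose this through a reverse proxy as in the tutorials above, keep the API bound to localhost and add authentication at the proxy layer.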
Need help?
Don't want to manage this yourself?
We handle infrastructure for businesses that depend on uptime, from initial setup to ongoing operations.
Talk to an engineer