Infrastructure tutorials
Production-grade guides for Linux, servers, security, and performance. Copy-paste commands, multi-distro support, written by engineers who run these systems in production.
Browse by topic
Linux
System administration, shell scripting, package management
Hosting & Servers
Web servers, reverse proxies, SSL, domains
Security
Firewalls, hardening, encryption, access control
Performance
Caching, optimization, profiling, load testing
Databases
MySQL, PostgreSQL, Redis, backups, replication
Networking
DNS, load balancing, VPN, TCP/IP, routing
DevOps
CI/CD, Docker, Kubernetes, automation
Monitoring
Logging, alerting, metrics, observability
Most viewed
Install and configure CockroachDB cluster with high availability and distributed SQL (databases)
Install and configure ArgoCD for GitOps continuous deployment with RBAC and SSL (devops)
Install and configure PostgreSQL 17 with performance tuning and security hardening (databases)
Install and configure WireGuard VPN server with client management (networking)
Install and configure Loki for centralized log aggregation with Grafana integration (monitoring)
Recently published
Configure SSL encryption and authentication for ClamAV cluster with high availability scanning (security)
Implement MinIO security hardening with IAM policies and audit logging (security)
Set up Elasticsearch monitoring with Metricbeat and Kibana dashboards (monitoring)
Configure Vault auto-unseal with AWS KMS for high availability secrets management (security)
Implement MariaDB connection pooling with ProxySQL for high availability (databases)
Install and configure WireGuard VPN server with client management
Set up a secure WireGuard VPN server with automated client management, including key generation, firewall configuration, and traffic routing for remote access.
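A minimal sketch of what the server side looks like: a `wg0.conf` with one `[Peer]` block per client. The addresses, port, and key placeholders below are illustrative assumptions, not values from the tutorial; keys are generated with `wg genkey` and `wg pubkey`.

```ini
[Interface]
# Server private key; generate with: wg genkey (keep mode 600)
PrivateKey = <server-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
# One block per client; public key derived with: wg pubkey
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

With this file at `/etc/wireguard/wg0.conf`, `wg-quick up wg0` brings the interface up, and adding a client is just appending another `[Peer]` block.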
Install and configure Traefik reverse proxy with SSL automation
Set up Traefik as a reverse proxy with Docker Compose for automatic SSL certificate management, service discovery, and load balancing across multiple backend services.
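A minimal Docker Compose sketch of that setup, assuming Traefik v3, a TLS-ALPN ACME challenge, and placeholder domains/emails (`example.com` values are not from the tutorial):

```yaml
services:
  traefik:
    image: traefik:v3.1
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=admin@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "443:443"
    volumes:
      # Persist issued certificates across restarts
      - ./letsencrypt:/letsencrypt
      # Read-only Docker socket for service discovery
      - /var/run/docker.sock:/var/run/docker.sock:ro

  whoami:
    image: traefik/whoami
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls.certresolver=le
```

Backend services opt in via labels, so adding a new service behind HTTPS is a matter of attaching a router rule rather than editing the proxy config.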
Install and configure Ollama for local AI models on Linux servers
Set up Ollama to run large language models locally on your Linux server. This tutorial covers installation, GPU acceleration, model deployment, API configuration, and performance optimization.
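The core of that workflow fits in a few commands; a sketch assuming a systemd-based Linux host and `llama3` as an example model:

```shell
# Install Ollama via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model and run a one-off prompt (llama3 is just an example)
ollama pull llama3
ollama run llama3 "Say hello in one sentence."

# The HTTP API listens on localhost:11434 by default
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello", "stream": false}'
```

GPU acceleration and serving options layer on top of this; the commands above are the baseline install-and-query loop.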
Need help?
Don't want to manage this yourself?
We handle infrastructure for businesses that depend on uptime. From initial setup to ongoing operations.
Talk to an engineer