Learn to configure Linux system resource limits using systemd, ulimit, and /etc/security/limits.conf to prevent application failures from resource exhaustion. Master per-user, per-service, and system-wide limits for optimal performance.
Prerequisites
- Root or sudo access
- Basic understanding of Linux system administration
- A systemd-based Linux distribution
What this solves
Linux system resource limits control how much memory, CPU time, file descriptors, and processes applications can use. When applications hit these limits, you see errors like "too many open files", process crashes, or performance degradation. This tutorial shows you how to configure system-wide, per-user, and per-service resource limits using systemd, ulimit, and traditional Linux limit mechanisms to ensure stable application performance.
Understanding Linux resource limits
Linux implements resource limits through the kernel's rlimit system, which controls resources like open files, memory usage, CPU time, and process counts. These limits exist at multiple levels: system-wide kernel limits, user session limits via PAM and limits.conf, and service-specific limits via systemd unit files.
Modern systemd-based distributions use systemd to manage service limits, while user session limits are controlled through /etc/security/limits.conf and PAM modules. Understanding the hierarchy is crucial: systemd services never pass through PAM, so they ignore limits.conf entirely and take their limits from unit files, while kernel-wide sysctl ceilings such as fs.file-max cap everything.
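A quick way to see what the kernel actually enforces for any process, regardless of which layer set it, is that process's /proc entry. A minimal sketch, no root required:

```shell
# Every process exposes its effective rlimits in /proc/<pid>/limits;
# "self" resolves to the reading process. Columns show the soft limit,
# then the hard limit, then the unit.
grep -E 'Max open files|Max processes' /proc/self/limits
```

The same file works for any PID you can read, which makes it useful for confirming that a limit you configured actually reached the target process.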
Check current resource limits
View current limits for your session
Check the resource limits currently applied to your shell session.
ulimit -a
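ulimit -a prints soft limits by default; the soft/hard pair for a single resource can be read with the -S and -H flags. A small sketch using only shell built-ins:

```shell
# The soft limit is what the kernel enforces right now; a process may
# raise its own soft limit, but only up to the hard ceiling.
ulimit -S -n   # soft limit on open files
ulimit -H -n   # hard limit on open files
```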
Check limits for a running process
View the resource limits for any running process by examining its proc filesystem entry (replace nginx with the name of your process).
cat /proc/$(pgrep nginx | head -1)/limits
View systemd service limits
Check the resource limits configured for a systemd service.
systemctl show nginx --property=LimitNOFILE --property=LimitNPROC --property=LimitMEMLOCK
Step-by-step configuration
Configure system-wide limits with systemd
Create a systemd configuration file to set system-wide default limits for all services.
sudo mkdir -p /etc/systemd/system.conf.d
sudo tee /etc/systemd/system.conf.d/limits.conf << 'EOF'
[Manager]
DefaultLimitNOFILE=65536
DefaultLimitNPROC=32768
DefaultLimitMEMLOCK=infinity
DefaultLimitCORE=infinity
EOF
Configure per-service limits with systemd
Create a systemd drop-in directory to override limits for specific services like nginx.
sudo mkdir -p /etc/systemd/system/nginx.service.d
sudo tee /etc/systemd/system/nginx.service.d/limits.conf << 'EOF'
[Service]
LimitNOFILE=100000
LimitNPROC=65536
LimitMEMLOCK=infinity
EOF
Configure user session limits
Edit the limits.conf file to set resource limits for user sessions and login processes.
sudo cp /etc/security/limits.conf /etc/security/limits.conf.backup
# Append these entries (format: <domain> <type> <item> <value>).
# Note that the '*' wildcard does not match root, hence the explicit root entries.
sudo tee -a /etc/security/limits.conf << 'EOF'
* soft nofile 65536
* hard nofile 65536
* soft nproc 32768
* hard nproc 32768
www-data soft nofile 100000
www-data hard nofile 100000
root soft nofile 65536
root hard nofile 65536
EOF
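The soft/hard distinction matters in practice: a process may lower its own soft limit and raise it again freely, but only up to the hard ceiling; raising the hard ceiling itself requires root. A minimal bash sketch, no root required:

```shell
# Lower the soft open-files limit inside a child bash; the change is
# confined to that process and its children.
bash -c '
  ulimit -S -n 512            # lowering the soft limit always works
  echo "soft now: $(ulimit -S -n)"
  echo "hard cap: $(ulimit -H -n)"
'
```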
Configure kernel limits
Set kernel-level ceilings that apply system-wide using sysctl parameters (the file name under /etc/sysctl.d/ is arbitrary).
sudo tee /etc/sysctl.d/99-limits.conf << 'EOF'
# Maximum number of open files system-wide
fs.file-max = 2097152
# Maximum PID value (effective ceiling on process and thread IDs)
kernel.pid_max = 4194304
# Maximum number of memory map areas per process
vm.max_map_count = 262144
EOF
Apply the configuration changes
Reload systemd configuration and apply kernel parameter changes.
sudo systemctl daemon-reload
sudo sysctl --system
Restart services to apply limits
Restart any services you modified; systemd applies the new limits when the service starts. Changes to limits.conf take effect at the next login, so existing user sessions keep their old limits until users log in again.
sudo systemctl restart nginx
Configure specific resource limits
File descriptor limits (NOFILE)
Configure limits for the number of open files, crucial for web servers and databases handling many connections. systemd has no separate "Soft" directive: LimitNOFILE= accepts either a single value (applied to both soft and hard) or a soft:hard pair.
[Service]
LimitNOFILE=100000:100000
Process limits (NPROC)
Set limits on the number of processes a user or service can create, again as a single value or a soft:hard pair.
[Service]
LimitNPROC=32768:65536
Memory limits (AS, RSS, MEMLOCK)
Configure memory-related limits including virtual memory and locked memory. Note that RLIMIT_RSS is silently ignored by modern Linux kernels, so LimitRSS= has no effect; use systemd's cgroup-based MemoryMax= to cap real memory usage instead.
[Service]
LimitMEMLOCK=infinity
LimitAS=infinity
MemoryMax=8G
Temporary limit adjustments
Set temporary limits with ulimit
Adjust limits temporarily for the current shell session or script execution.
# Increase the file descriptor soft limit for the current session
# (raising it past the hard limit requires root)
ulimit -n 100000
# Set the virtual memory limit to 2 GB (the value is in KB)
ulimit -v 2097152
# Set the maximum number of user processes to 16384
ulimit -u 16384
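Keep in mind that ulimit changes are per-process and inherited only by children; they never propagate back up. A small sketch showing the parent shell is untouched:

```shell
# Lower the soft limit inside a subshell; the parent keeps its value.
parent_before=$(ulimit -S -n)
( ulimit -S -n 256; echo "subshell: $(ulimit -S -n)" )
echo "parent unchanged: $(ulimit -S -n)"   # same as $parent_before
```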
Apply limits to specific commands
Use the prlimit command to set limits for specific processes or commands.
# Run command with specific limits
prlimit --nofile=50000:100000 --nproc=10000:20000 your-application
# Set limits for a running process (here, PID 1234)
sudo prlimit --pid 1234 --nofile=100000
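prlimit (part of util-linux) can also launch a command under a temporary limit without touching the calling shell. A sketch, assuming prlimit is installed:

```shell
# Show the caller's RLIMIT_NOFILE (soft and hard columns).
prlimit --nofile
# Run a child with soft=512, hard=1024; the child's own ulimit
# reports the soft value it was started with.
prlimit --nofile=512:1024 sh -c 'ulimit -n'
```

The second command should print 512, confirming the child received the lowered limit while the parent shell keeps its own.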
Verify your setup
# Check current ulimit settings
ulimit -a
# Verify systemd service limits
systemctl show nginx --property=LimitNOFILE --property=LimitNPROC
# Check kernel limits
sysctl fs.file-max kernel.pid_max
# View limits for a specific process
cat /proc/$(pgrep nginx | head -1)/limits
# Check system-wide file descriptor usage
cat /proc/sys/fs/file-nr
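/proc/sys/fs/file-nr reports three numbers: allocated file handles, free allocated handles (always 0 on modern kernels), and the fs.file-max ceiling. A small sketch that turns this into a headroom percentage:

```shell
# Compare allocated file handles against the system-wide ceiling.
read allocated free maximum < /proc/sys/fs/file-nr
echo "allocated: $allocated / $maximum"
awk -v a="$allocated" -v m="$maximum" \
    'BEGIN { printf "usage: %.2f%% of fs.file-max\n", a * 100 / m }'
```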
Troubleshoot common limit-related errors
Debug "too many open files" errors
Identify which processes are consuming file descriptors and adjust limits accordingly.
# Find processes using the most file descriptors
sudo lsof | awk '{print $2}' | sort | uniq -c | sort -nr | head -10
# Check file descriptor usage for a specific process
sudo ls /proc/$(pgrep nginx | head -1)/fd | wc -l
Monitor resource usage
Set up monitoring to track resource consumption and prevent limit-related issues.
# Create monitoring script
sudo tee /usr/local/bin/check-limits.sh << 'EOF'
#!/bin/bash
echo "=== System-wide limits ==="
echo "Open files: $(cat /proc/sys/fs/file-nr)"
echo "Max files: $(cat /proc/sys/fs/file-max)"
echo "Processes: $(ps aux | wc -l)"
echo "Max PID: $(cat /proc/sys/kernel/pid_max)"
echo -e "\n=== Top processes by open files ==="
for pid in $(ps -eo pid --no-headers); do
    if [ -d /proc/$pid/fd ]; then
        count=$(ls /proc/$pid/fd 2>/dev/null | wc -l)
        comm=$(cat /proc/$pid/comm 2>/dev/null || echo "unknown")
        echo "$pid ($comm): $count files"
    fi
done | sort -k3 -nr | head -10
EOF
sudo chmod +x /usr/local/bin/check-limits.sh
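To run the monitor on a schedule, a cron entry such as the following (hypothetical path and interval) could be placed in /etc/cron.d/:

```
# /etc/cron.d/check-limits — run every 10 minutes, append output to a log
*/10 * * * * root /usr/local/bin/check-limits.sh >> /var/log/check-limits.log 2>&1
```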
Performance optimization with limits
Resource limits directly impact application performance. Web servers like nginx and Apache need high file descriptor limits to handle concurrent connections. Database servers require adequate process and memory limits. Setting limits too low causes application failures, while setting them too high can lead to resource exhaustion affecting the entire system.
For high-performance web servers, consider file descriptor limits of 100,000 or more. Database servers often need unlimited memory locking (MEMLOCK=infinity) for optimal performance. Applications handling many concurrent processes benefit from increased NPROC limits. Always monitor actual resource usage and adjust limits based on real workload patterns rather than arbitrary high values.
This configuration approach complements other performance optimizations covered in our Apache performance tuning and Linux system performance optimization tutorials.
Common issues
| Symptom | Cause | Fix |
|---|---|---|
| "too many open files" error | NOFILE limit too low | Increase LimitNOFILE in systemd service or limits.conf |
| "Cannot fork" or process creation fails | NPROC limit exceeded | Increase LimitNPROC or nproc in limits.conf |
| Application crashes with memory errors | Virtual memory limit reached | Increase LimitAS or remove memory limits for service |
| ulimit changes don't persist | Not configured in limits.conf | Add permanent limits to /etc/security/limits.conf |
| systemd service ignores limits.conf | systemd services bypass PAM and never read limits.conf | Configure limits in systemd service drop-in files |
| Database performance poor | Memory locking disabled | Set LimitMEMLOCK=infinity for database services |
| Web server connection drops | Insufficient file descriptors | Monitor with lsof and increase NOFILE limits |
| Changes not applied after reboot | Configuration not loaded | Verify sysctl.conf and run systemctl daemon-reload |
Next steps
- Configure Linux memory management and swap optimization for high-performance workloads
- Optimize Linux I/O performance with kernel tuning and storage schedulers
- Configure Linux process scheduling and CPU affinity for performance optimization
- Monitor Linux system resources with performance alerts and automated responses