Fix high load average with low CPU usage on web servers

Beginner 25 min Apr 17, 2026
Ubuntu 24.04 Ubuntu 22.04 Debian 12 AlmaLinux 9 Rocky Linux 9 Fedora 41

Diagnose and resolve high load average issues when CPU usage remains low on Linux web servers. Learn to identify I/O bottlenecks, blocked processes, and system resource contention affecting performance.

Prerequisites

  • Root or sudo access to the server
  • Basic understanding of Linux command line
  • Web server or application experiencing high load average

What this solves

You see load averages of 5, 10, or higher on your web server, but CPU usage stays low at 10-20%. This indicates processes are waiting for resources like disk I/O, network operations, or system locks rather than using CPU cycles. This tutorial shows you how to diagnose the root cause and fix common bottlenecks that create high load with low CPU usage.

Understanding load average vs CPU usage

Load average measures how many processes are either running on CPU or waiting for resources (disk, network, locks). CPU usage only shows active processing time. A system can have high load average with low CPU usage when processes spend time waiting rather than computing.

Load average shows three numbers representing 1-minute, 5-minute, and 15-minute averages. On a single-core system, load average of 1.0 means the system is fully utilized. On a 4-core system, load average of 4.0 indicates full utilization.
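A quick way to judge whether a given load is actually high for this machine is to normalize the 1-minute figure by the CPU count. A minimal sketch using /proc/loadavg and nproc:

```shell
# Read the 1-minute load average and divide by the number of online CPUs;
# a per-core value well above 1.0 means more runnable or waiting
# processes than the machine can service.
load=$(awk '{print $1}' /proc/loadavg)
cores=$(nproc)
awk -v l="$load" -v c="$cores" \
    'BEGIN {printf "load=%s cores=%s per-core=%.2f\n", l, c, l / c}'
```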

Check current load average and CPU usage

View your system's current load and CPU statistics to confirm the issue.

uptime
top -bn1 | head -5
cat /proc/loadavg

The output shows load averages and CPU usage percentages. High load (above number of CPU cores) with low CPU usage indicates resource waiting.

Step-by-step diagnosis

Install diagnostic tools

Install system monitoring tools to analyze process behavior and resource usage. Use apt on Debian and Ubuntu, or dnf on AlmaLinux, Rocky Linux, and Fedora.

sudo apt update
sudo apt install -y htop iotop sysstat lsof
sudo dnf install -y htop iotop sysstat lsof

Identify I/O wait processes

Check if processes are waiting for disk I/O operations, which commonly causes high load with low CPU usage.

iostat -x 1 5
sudo iotop -o

High %iowait values in iostat output indicate disk bottlenecks. The iotop command shows which processes are performing heavy disk operations.
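For a scriptable check that needs no extra tools, the cumulative iowait share can be computed from the aggregate cpu line of /proc/stat (the fifth value is iowait jiffies). Note this is an average since boot, so use iostat for the current rate:

```shell
# Sum all jiffy counters on the "cpu" line and report what fraction was
# spent in iowait ($6, because $1 is the literal "cpu" label).
awk '/^cpu / {total = 0; for (i = 2; i <= NF; i++) total += $i;
    printf "iowait since boot: %.1f%%\n", 100 * $6 / total}' /proc/stat
```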

Check for blocked processes

Examine processes in uninterruptible sleep state (D state) that contribute to load average.

ps -eo pid,ppid,state,comm | awk '$3 == "D"'
grep procs_blocked /proc/stat
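To see why a process is stuck, the wchan column names the kernel function it is blocked in (often an I/O or filesystem wait). A sketch:

```shell
# List D-state processes with their kernel wait channel; an empty
# listing is the healthy case.
ps -eo pid,state,wchan:32,comm | awk '$2 == "D"'
```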

Monitor disk utilization

Analyze disk performance metrics to identify storage bottlenecks.

df -h
du -sh /var/log/* | sort -rh | head -10
sudo lsof +L1

Check disk space usage and identify large log files or deleted files still held open by processes. The lsof command shows files marked for deletion but still consuming space.
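To quantify how much space deleted-but-open files still hold, the SIZE/OFF column (field 7) of lsof's output can be summed. A sketch, assuming the default lsof column layout:

```shell
# Sum bytes held by files deleted while still open; lsof tags these
# entries "(deleted)" in the NAME column.
sudo lsof +L1 2>/dev/null |
    awk '/\(deleted\)/ {sum += $7}
         END {printf "held by deleted files: %.1f MiB\n", sum / 1024 / 1024}'
```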

Examine network I/O waiting

Check if processes are waiting for network operations that could increase load average.

ip -s link
ss -tuln
lsof -i
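Connection pile-ups show up as many sockets stuck in one state. Per-state counts can be read straight from /proc/net/tcp, whose st column (field 4) holds the state in hex (01 = ESTABLISHED, 08 = CLOSE-WAIT):

```shell
# Tally TCP sockets by state code; a large count of 08 (CLOSE-WAIT)
# means the application is not closing finished connections.
awk 'NR > 1 {states[$4]++} END {for (s in states) print s, states[s]}' /proc/net/tcp
```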

Check for zombie processes

Identify zombie processes that might contribute to system load issues.

ps aux | grep '[d]efunct'
ps -eo pid,ppid,state,comm | grep " Z "
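Zombies consume no CPU themselves, but each one points at a parent that never reaped it; listing zombies together with their parents shows which service to restart:

```shell
# Print each zombie with its parent PID; restarting (or fixing) the
# parent is what clears zombies, since they ignore signals.
ps -eo pid,ppid,state,comm |
    awk '$3 == "Z" {print "zombie", $1, "parent", $2, "(" $4 ")"}'
```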

Common fixes for high load low CPU

Optimize disk I/O performance

Adjust the I/O scheduler and mount options for better disk performance. Modern SSDs generally perform best with the none or mq-deadline schedulers (noop only existed on the legacy, pre-blk-mq stack).

cat /sys/block/sda/queue/scheduler
echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler
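A change made through /sys is lost at reboot. One common way to persist it is a udev rule; the file name below is arbitrary, and the rule targets non-rotational sd* devices:

```shell
# Apply mq-deadline to all non-rotational (SSD) sd* devices at boot;
# the rule file name is an example, not a convention.
echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"' |
    sudo tee /etc/udev/rules.d/60-ioscheduler.rules
```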

A scheduler set through /sys does not survive a reboot, so make it permanent with a udev rule. You can also reference our guide on optimizing Linux I/O performance with kernel tuning for comprehensive storage optimization.

Clear large log files

Truncate large log files that might be causing disk I/O bottlenecks without stopping services.

sudo find /var/log -name "*.log" -size +100M -ls
sudo truncate -s 0 /var/log/large-application.log
sudo systemctl reload rsyslog
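If several logs are oversized, find's -exec can truncate them in one pass. Truncation (rather than rm) keeps the writing process's file handle valid, so no restart is needed:

```shell
# Zero out every log over 100 MB in place; the owning processes keep
# writing to the same, now empty, files.
sudo find /var/log -name "*.log" -size +100M -exec truncate -s 0 {} +
```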

Increase file descriptor limits

Raise file descriptor limits by adding these lines to /etc/security/limits.conf. Processes that hit the open-file limit block or fail while waiting for descriptors to free up.

* soft nofile 65536
* hard nofile 65536
root soft nofile 65536
root hard nofile 65536
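Limits set in /etc/security/limits.conf only apply to sessions started after the change, so verify what a shell actually received:

```shell
# Current shell's soft open-file limit; compare against the value set above.
ulimit -n
# prlimit (util-linux) can inspect any PID, e.g. a running worker process.
prlimit --pid $$ --nofile
```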

Configure kernel parameters for high load scenarios

Adjust kernel parameters to handle high concurrent connections and I/O operations better. Save the following settings in /etc/sysctl.d/99-high-load.conf:

vm.dirty_ratio = 15
vm.dirty_background_ratio = 5
vm.vfs_cache_pressure = 50
fs.file-max = 2097152

Apply the changes immediately and verify they're active.

sudo sysctl -p /etc/sysctl.d/99-high-load.conf
sysctl vm.dirty_ratio vm.dirty_background_ratio

For detailed kernel parameter optimization, see our guide on configuring Linux kernel parameters with sysctl.

Restart hung processes

Identify and restart processes stuck in uninterruptible sleep state that cannot be killed normally.

sudo systemctl restart apache2
sudo systemctl restart nginx
sudo systemctl restart mysql

Web server specific optimizations

Optimize Apache/Nginx worker processes

Adjust web server configuration to handle concurrent connections more efficiently. For Nginx, these settings live in /etc/nginx/nginx.conf: worker_processes in the main context, worker_connections inside the events block, and the rest in the http block.

worker_processes auto;
worker_connections 1024;
keepalive_timeout 30;
client_max_body_size 10m;

Tune database connection settings

Reduce connection overhead and queuing that can leave web processes waiting for an available database connection. Set these in the [mysqld] section of your MySQL or MariaDB configuration:

max_connections = 200
connect_timeout = 5
wait_timeout = 600
max_allowed_packet = 64M

Enable web server caching

Implement caching to reduce disk I/O from repeated file access.

location ~* \.(jpg|jpeg|png|gif|css|js)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
}

Monitoring and prevention strategies

Set up continuous monitoring

Install monitoring tools to track load average trends and get early warnings.

sudo crontab -e
*/5 * * * * uptime >> /var/log/load-average.log

For comprehensive system monitoring, consider setting up process monitoring with htop and other tools.

Create load average alerts

Set up email alerts when load average exceeds acceptable thresholds. Save the following as /usr/local/bin/check-load.sh (it requires bc and a working mail command):

#!/bin/bash
# Read the 1-minute load average from /proc/loadavg; the field position
# in `uptime` output varies, so parse the kernel file directly.
LOAD=$(awk '{print $1}' /proc/loadavg)
THRESHOLD=4.0
if (( $(echo "$LOAD > $THRESHOLD" | bc -l) )); then
    echo "High load average: $LOAD" | mail -s "Load Alert" admin@example.com
fi

sudo chmod +x /usr/local/bin/check-load.sh
sudo crontab -e
*/10 * * * * /usr/local/bin/check-load.sh

Implement log rotation

Configure automatic log rotation to prevent large log files from causing I/O bottlenecks.

/var/log/myapp/*.log {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
    create 644 www-data www-data
    postrotate
        systemctl reload myapp
    endscript
}

Verify your setup

uptime
iostat -x 1 3
ps -eo pid,ppid,state,comm | grep " D " | wc -l
free -m
df -h

Monitor these metrics over time to confirm load average decreases while maintaining low CPU usage. The number of processes in D state should be minimal.

Common issues

| Symptom | Cause | Fix |
| --- | --- | --- |
| Load stays high after fixes | Hardware failure or kernel bug | Check dmesg for errors; consider a reboot |
| iotop shows no disk activity | Network I/O or memory pressure | Check free -m and network connections |
| Cannot kill D state processes | Kernel-level wait for resources | Restart related services or reboot the system |
| Load increases during backups | I/O scheduler not optimized | Run backups with ionice -c3 priority |
| MySQL processes in D state | Disk corruption or table locks | Check table integrity with mysqlcheck |
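The backup fix from the table can be sketched as follows; the archive path and source directory are examples:

```shell
# Run the backup in the idle I/O class (-c3) and at lowest CPU priority
# so interactive and web traffic always win the disk.
sudo ionice -c3 nice -n 19 tar czf /backup/site.tar.gz /var/www
```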

Next steps

Running this in production?

Need this monitored 24/7? This guide works for diagnosing occasional load spikes, but when you run web servers that need constant availability, keeping them optimized and responding to performance issues around the clock is the harder part. See how we run infrastructure like this for European teams with automated monitoring and incident response.

Need help?

Don't want to manage this yourself?

We handle managed cloud infrastructure for businesses that depend on uptime. From initial setup to ongoing operations.