Optimize Linux filesystem performance with mount options and I/O schedulers

Beginner · 25 min · Apr 03, 2026
Ubuntu 24.04 Ubuntu 22.04 Debian 12 AlmaLinux 9 Rocky Linux 9 Fedora 41

Learn to optimize Linux filesystem performance by configuring I/O schedulers, tuning ext4 mount options, and monitoring disk performance with iostat and iotop for high-throughput workloads.

Prerequisites

  • Root or sudo access
  • Basic understanding of Linux filesystems
  • Storage devices to optimize

What this solves

Filesystem performance is critical for applications with heavy disk I/O requirements like databases, web servers, and analytics workloads. Default filesystem settings often prioritize data safety over performance, leaving significant optimization opportunities on the table. This tutorial shows you how to optimize I/O schedulers, configure filesystem-specific mount options, and monitor performance metrics to achieve better throughput and lower latency.

Step-by-step configuration

Install required monitoring tools

Install iostat, iotop, and other performance monitoring utilities to baseline and measure filesystem performance improvements.

# Debian/Ubuntu
sudo apt update
sudo apt install -y sysstat iotop hdparm fio

# RHEL family (AlmaLinux, Rocky Linux, Fedora)
sudo dnf install -y sysstat iotop hdparm fio

Check current I/O scheduler

View the current I/O scheduler for each block device to understand your baseline configuration. The active scheduler is shown in square brackets, e.g. `[mq-deadline]`.

cat /sys/block/sda/queue/scheduler
ls /sys/block/ | grep -v loop | xargs -I {} sh -c 'echo "Device {}: $(cat /sys/block/{}/queue/scheduler)"'

Configure optimal I/O scheduler

Set the I/O scheduler based on your storage type and workload. Use mq-deadline for SSDs with general workloads, kyber for high IOPS requirements, or none for NVMe devices with very fast storage.

# For SSDs and general workloads (recommended default)
echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler

# For high-IOPS workloads requiring low latency
echo kyber | sudo tee /sys/block/sda/queue/scheduler

# For NVMe with very fast storage (bypasses the scheduler)
echo none | sudo tee /sys/block/nvme0n1/queue/scheduler

Make I/O scheduler changes persistent

Configure udev rules to automatically apply the optimal I/O scheduler on boot based on device type.

# Set mq-deadline for SATA SSDs
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"

# Set kyber for specific high-performance SSD models. udev match rules only
# support glob comparison (== / !=), not numeric comparisons such as size
# thresholds, so match on the model string instead (placeholder model shown)
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="0", ATTRS{model}=="YourSSDModel*", ATTR{queue/scheduler}="kyber"

# Set none for NVMe devices
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"
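For the rules to take effect, they must be installed under /etc/udev/rules.d/ and udev must re-read them. A minimal sketch staging the two safe defaults from above; the file name 60-io-scheduler.rules is an arbitrary convention, and the root-only install steps are shown as comments:

```shell
# Stage the rules in a temporary file first, then install as root
rules=/tmp/60-io-scheduler.rules
cat > "$rules" <<'EOF'
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"
EOF
echo "staged $(wc -l < "$rules") rules in $rules"

# Then, as root:
#   install -m 644 /tmp/60-io-scheduler.rules /etc/udev/rules.d/
#   udevadm control --reload-rules
#   udevadm trigger --subsystem-match=block
```

`udevadm trigger` re-applies the rules to existing devices immediately, so no reboot is needed to test them.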

Optimize ext4 mount options

Configure performance-oriented mount options for ext4 filesystems. The noatime option eliminates access-time updates, and data=writeback improves write performance by not forcing data writes before metadata. Note that the data= mode of an already-mounted root filesystem cannot be changed with a simple remount; set it with tune2fs -o journal_data_writeback or via the rootflags= kernel parameter.

# Backup current fstab
sudo cp /etc/fstab /etc/fstab.backup

# Check current mount options
mount | grep ext4

# Example optimized ext4 mount options
# (replace /dev/sda1 and / with your actual device and mount point)
/dev/sda1 / ext4 defaults,noatime,data=writeback,barrier=0,commit=60 0 1

# For data partitions where performance is critical
# (avoid nodelalloc here: disabling delayed allocation usually hurts throughput)
/dev/sda2 /var/lib/mysql ext4 defaults,noatime,data=writeback,barrier=0,commit=60 0 2
Warning: The data=writeback and barrier=0 options improve performance but reduce data safety. Use only on systems with UPS power protection or where performance is more critical than data durability.
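To see concretely what noatime avoids, the sketch below checks whether a plain read updates a file's access time on the current mount. Under the usual relatime default, the first read after a write typically does update it; with noatime it never does:

```shell
# Demonstrate what noatime skips: whether a read bumps the access time
# depends on the atime mode of the mount the file lives on
f=$(mktemp)
echo data > "$f"
before=$(stat -c %X "$f")   # atime before the read, in epoch seconds
sleep 2
cat "$f" > /dev/null        # read the file
after=$(stat -c %X "$f")    # atime after the read
if [ "$after" -gt "$before" ]; then
  msg="atime updated on read"
else
  msg="atime unchanged (noatime or relatime suppressed the update)"
fi
echo "$msg"
rm -f "$f"
```

Every suppressed atime update is one less metadata write, which is why noatime is a cheap win for read-heavy workloads.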

Configure XFS performance options

For XFS filesystems, optimize allocation and logging options for better performance with large files and high concurrency.

# Optimized XFS mount options
/dev/sdb1 /data xfs defaults,noatime,largeio,inode64,swalloc,logbsize=256k 0 2

Apply new mount options

Remount filesystems with new options or reboot to apply all changes. Test one filesystem at a time in production environments.

# Test fstab syntax
sudo findmnt --verify

# Remount specific filesystem (if possible)
sudo mount -o remount /

# Or reboot to apply all changes
sudo reboot

Tune filesystem readahead

Increase readahead values for sequential workloads like large file processing or streaming. This prefetches more data, reducing seeks for sequential reads.

# Check current readahead (the value is a count of 512-byte sectors)
sudo blockdev --getra /dev/sda

# Increase readahead for sequential workloads (adjust based on workload)
sudo blockdev --setra 4096 /dev/sda
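Since blockdev --setra counts 512-byte sectors rather than bytes, it is worth checking what the value above actually amounts to:

```shell
# blockdev --setra takes a count of 512-byte sectors, so 4096 sectors
# of readahead works out to 2 MiB per sequential stream
sectors=4096
ra_kib=$((sectors * 512 / 1024))
echo "${sectors} sectors = ${ra_kib} KiB ($((ra_kib / 1024)) MiB) of readahead"
```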

# Make persistent via rc.local; note that rc.local is not executed by
# default on many modern distributions, so prefer a udev rule or systemd unit
echo 'blockdev --setra 4096 /dev/sda' | sudo tee -a /etc/rc.local
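On distributions where rc.local is not honored, a small systemd oneshot unit applies the readahead setting at boot. The unit name, path, and device below are illustrative:

```ini
# /etc/systemd/system/set-readahead.service (name and device are illustrative)
[Unit]
Description=Set block device readahead
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/blockdev --setra 4096 /dev/sda

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl daemon-reload && sudo systemctl enable --now set-readahead.service`.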

Configure Btrfs optimization

For Btrfs filesystems, disable copy-on-write for database files and configure optimal mount options for performance-critical applications.

# Optimized Btrfs mount options
/dev/sdc1 /data btrfs defaults,noatime,compress=lzo,space_cache=v2,commit=60 0 0

# Disable COW for database directories. The +C flag only affects files
# created after it is set; existing files must be copied or recreated
sudo chattr +C /var/lib/mysql
sudo chattr +C /var/lib/postgresql

Optimize kernel parameters

Tune kernel parameters that affect filesystem and I/O performance. These settings optimize page cache behavior and I/O scheduling.

# Example file: /etc/sysctl.d/99-io-performance.conf

# Reduce swappiness to keep filesystem cache in memory
vm.swappiness = 1

# Allow more dirty pages in memory before forcing writes
vm.dirty_ratio = 15
vm.dirty_background_ratio = 5

# Reduce dirty page cache timeout
vm.dirty_expire_centisecs = 1500
vm.dirty_writeback_centisecs = 500

# Raise the system-wide open file handle limit
fs.file-max = 2097152

# Scheduler granularity tuning for high-throughput workloads
# (these sysctls were removed in kernel 5.13; on newer kernels the
# equivalents live under /sys/kernel/debug/sched/ instead)
kernel.sched_min_granularity_ns = 10000000
kernel.sched_wakeup_granularity_ns = 15000000
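The dirty ratios are percentages of total RAM, so what they permit depends on the machine. Translating them for a hypothetical 16 GiB host (the RAM size is an assumption for illustration):

```shell
# vm.dirty_ratio / vm.dirty_background_ratio are percentages of total RAM;
# convert them to absolute sizes for an assumed 16 GiB host
ram_mib=16384                       # assumed RAM size, for illustration only
dirty=$((ram_mib * 15 / 100))       # vm.dirty_ratio = 15
background=$((ram_mib * 5 / 100))   # vm.dirty_background_ratio = 5
echo "background writeback starts at ${background} MiB of dirty pages"
echo "writers block (synchronous flush) at ${dirty} MiB of dirty pages"
```

This is why an overly high dirty_ratio on a large-memory host can stall the system: gigabytes of dirty pages get flushed at once.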

Apply kernel parameter changes

Load the new kernel parameters and verify they are applied correctly.

sudo sysctl --system
sudo sysctl vm.dirty_ratio vm.swappiness

Monitor filesystem performance

Use iostat for I/O monitoring

Monitor disk I/O statistics to identify bottlenecks and measure performance improvements. The key iostat -x columns are r/s and w/s (IOPS), rkB/s and wkB/s (throughput), r_await and w_await (latency), and %util (device saturation).

# Monitor I/O every 2 seconds, 5 iterations
iostat -x 2 5

# Monitor specific device
iostat -x /dev/sda 2 5

# Extended statistics with detailed metrics
iostat -dx 1

Use iotop for process-level I/O monitoring

Identify which processes are generating the most I/O to optimize application-specific performance.

# Monitor I/O by process (requires root); -o shows only processes doing I/O
sudo iotop -o

# Show accumulated I/O instead of bandwidth
sudo iotop -a

# Monitor specific processes (the MySQL server process is named mysqld)
sudo iotop -p $(pidof mysqld)

Benchmark filesystem performance

Use fio to benchmark your filesystem performance before and after optimizations to measure improvements.

# Random read/write benchmark
sudo fio --name=randrw --ioengine=libaio --iodepth=16 --rw=randrw --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --group_reporting --filename=/tmp/fio-test

# Sequential read benchmark
sudo fio --name=seqread --ioengine=libaio --iodepth=32 --rw=read --bs=1M --direct=1 --size=4G --numjobs=1 --runtime=60 --group_reporting --filename=/tmp/fio-seq-test

# Clean up test files
sudo rm -f /tmp/fio-test /tmp/fio-seq-test
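A quick way to sanity-check fio's output is to relate throughput and IOPS through the block size, since IOPS = bytes per second / block size. The 200 MiB/s figure below is a made-up example, not a measurement:

```shell
# Relate fio's throughput and IOPS numbers through the block size
tp_mib=200   # example throughput in MiB/s (assumption, not a measurement)
bs=4096      # block size in bytes, matching the 4k random test above
iops=$(( tp_mib * 1024 * 1024 / bs ))
echo "${tp_mib} MiB/s at ${bs}-byte blocks = ${iops} IOPS"
```

If the two numbers fio reports do not roughly satisfy this relation for your block size, the run likely hit a cache rather than the device.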

Verify your setup

# Verify I/O scheduler
cat /sys/block/sda/queue/scheduler

# Check mount options
mount | grep -E 'ext4|xfs|btrfs'

# Verify kernel parameters
sysctl vm.dirty_ratio vm.swappiness vm.dirty_background_ratio

# Check filesystem performance
iostat -x 1 3

# Test readahead setting
sudo blockdev --getra /dev/sda

Common issues

Symptom                                  | Cause                                | Fix
High iowait times                        | Wrong I/O scheduler or mount options | Switch to the mq-deadline scheduler and add the noatime mount option
Filesystem corruption after optimization | Aggressive options like barrier=0    | Remove barrier=0; use data=ordered instead of writeback
Poor database performance                | Default mount options                | Use noatime, increase commit intervals, disable COW for Btrfs
System unresponsive under I/O load       | vm.dirty_ratio too high              | Reduce vm.dirty_ratio to 10-15 and vm.dirty_background_ratio to 5
Mount fails after fstab changes          | Invalid mount options                | Boot into rescue mode and restore /etc/fstab.backup

