Optimize Linux network stack performance with sysctl tuning and TCP congestion control

Intermediate 35 min Apr 03, 2026
Ubuntu 24.04 Ubuntu 22.04 Debian 12 AlmaLinux 9 Rocky Linux 9 Fedora 41

Learn how to optimize Linux network performance using sysctl kernel parameters, TCP BBR congestion control, and advanced buffer tuning. This guide covers baseline testing, monitoring, and production-grade configurations for high-throughput servers.

Prerequisites

  • Root or sudo access
  • Linux kernel 4.9+ for BBR support
  • Network testing tools (iperf3)
  • Basic understanding of TCP/IP concepts

What this solves

Linux network performance can be significantly improved through kernel parameter tuning and TCP congestion control optimization. This tutorial helps you configure sysctl parameters, enable TCP BBR congestion control, optimize network buffers, and monitor network performance metrics for high-throughput servers and bandwidth-intensive applications.

Prerequisites and system requirements

You need root access to modify kernel parameters and network stack configuration. Modern kernels (4.9+) support BBR congestion control, while older kernels use CUBIC by default.
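To confirm the running kernel meets the 4.9 minimum before enabling BBR, the check can be scripted. This is a minimal sketch that uses `sort -V` for the numeric version comparison:

```shell
# Returns success when version $1 >= version $2 (numeric compare via sort -V)
version_ge() { [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

# Check the running kernel against the 4.9 minimum required for BBR
kver=$(uname -r | cut -d- -f1)
if version_ge "$kver" "4.9"; then
    echo "Kernel $kver: BBR supported"
else
    echo "Kernel $kver: too old for BBR, CUBIC will be used"
fi
```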

Install network performance tools

Install essential tools for network performance testing and monitoring.

# Debian/Ubuntu
sudo apt update
sudo apt install -y iperf3 iproute2 ethtool procps net-tools

# AlmaLinux/Rocky/Fedora
sudo dnf install -y iperf3 iproute net-tools ethtool procps-ng

Network performance baseline testing

Check current network configuration

Document your current network stack configuration before making changes.

cat /proc/sys/net/ipv4/tcp_congestion_control
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max
ss -i | head -20

Test baseline network performance

Run iperf3 tests to establish baseline performance metrics before optimization.

# On the server (replace with your server IP)
iperf3 -s -p 5201

# On the client (run from another machine)

iperf3 -c 203.0.113.10 -p 5201 -t 30 -P 4

Monitor network interface statistics

Check network interface statistics and identify potential bottlenecks.

ethtool -S eth0 | grep -E '(drop|error|fifo)'
cat /proc/net/dev
cat /proc/net/sockstat

TCP congestion control optimization

Check available congestion control algorithms

List available TCP congestion control algorithms supported by your kernel.

cat /proc/sys/net/ipv4/tcp_available_congestion_control
sudo modprobe tcp_bbr
echo 'tcp_bbr' | sudo tee -a /etc/modules-load.d/modules.conf

Enable TCP BBR congestion control

Configure TCP BBR for better bandwidth utilization and reduced latency. Add the following settings (and the other sysctl settings in this guide) to /etc/sysctl.d/10-network-performance.conf, which is loaded in the "Apply sysctl configuration" step.

# TCP BBR congestion control
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

# Disable slow start after idle (keeps the congestion window warm between bursts)

net.ipv4.tcp_slow_start_after_idle = 0

Configure advanced TCP parameters

Optimize TCP window scaling, timestamps, and selective acknowledgments.

# TCP window scaling and timestamps
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1

# TCP keepalive settings
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 3

Network buffer and queue optimization

Optimize socket buffer sizes

Configure kernel socket buffers for high-throughput network operations.

# Socket buffer sizes (16MB max)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 262144
net.core.wmem_default = 262144

# TCP socket buffer sizes (min, default, max)
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
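The 16 MB maximums are sized for high bandwidth-delay-product paths. As a sanity check, the buffer ceiling should be at least the path's BDP, which can be estimated with shell arithmetic (the bandwidth and RTT below are illustrative numbers, not measurements):

```shell
# BDP (bytes) = bandwidth (bits/s) / 8 * RTT (s)
# Example: a 1 Gbit/s path with 40 ms round-trip time
bw_bits=1000000000    # link bandwidth in bits per second
rtt_ms=40             # round-trip time in milliseconds
bdp_bytes=$(( bw_bits / 8 * rtt_ms / 1000 ))
echo "BDP: $bdp_bytes bytes"    # 5000000 bytes (~4.8 MiB)
```

A 16 MB cap covers this path with headroom; a 100 ms intercontinental RTT at 1 Gbit/s would need about 12.5 MB, still within the limit.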

Configure network device queues

Optimize network device receive and transmit queues for better performance.

# Network device queue settings
net.core.netdev_max_backlog = 5000
net.core.netdev_budget = 600

# TCP connection queue sizes
net.core.somaxconn = 1024
net.ipv4.tcp_max_syn_backlog = 2048

Enable TCP fast open and optimize connection handling

Configure TCP fast open and connection reuse for reduced latency.

# TCP Fast Open (client and server)
net.ipv4.tcp_fastopen = 3

# TCP connection reuse for outbound sockets
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 10

# TCP memory pressure settings (values are in pages: low, pressure, max)
net.ipv4.tcp_mem = 786432 1048576 1572864
net.ipv4.tcp_max_tw_buckets = 360000

High-throughput server tuning

Configure TCP congestion window and pacing

Optimize TCP congestion window initialization and pacing for high-bandwidth networks.

# Initial congestion window: net.ipv4.tcp_init_cwnd is not a real sysctl.
# The kernel default is already 10 segments; to change it, set it per route:
# sudo ip route change default via <gateway> dev eth0 initcwnd 10

# TCP pacing relies on the fq qdisc already configured for BBR; keep it as fq
net.core.default_qdisc = fq

# Don't cache metrics from previous connections
net.ipv4.tcp_no_metrics_save = 1

Optimize for high-concurrent connections

Configure kernel parameters for servers handling many simultaneous connections.

# IP port range for outbound connections
net.ipv4.ip_local_port_range = 1024 65535

# File descriptor limits
fs.file-max = 2097152

# Network security settings that also protect performance
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_rfc1337 = 1
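The widened port range determines how many concurrent outbound connections one source IP can hold open to a single destination IP:port. The configured pool size can be read back from the kernel (a small sketch):

```shell
# Read the configured ephemeral port range and report the pool size
read lo hi < /proc/sys/net/ipv4/ip_local_port_range
echo "$(( hi - lo + 1 )) ephemeral ports per (source IP, destination IP:port) tuple"
```

With the 1024-65535 range above, that is 64512 ports, versus 28232 with the common 32768-60999 default.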

Apply sysctl configuration

Load the new kernel parameters and verify they are active.

sudo sysctl -p /etc/sysctl.d/10-network-performance.conf
sudo sysctl net.ipv4.tcp_congestion_control
sudo sysctl net.core.default_qdisc

Configure network interface ring buffers

Optimize network interface ring buffer sizes for your network card.

# Check current ring buffer sizes
ethtool -g eth0

# Increase ring buffer sizes (adjust based on your NIC)
sudo ethtool -G eth0 rx 4096 tx 4096

# Make permanent by adding to network configuration
echo 'ethtool -G eth0 rx 4096 tx 4096' | sudo tee -a /etc/rc.local
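Most systemd-based distributions no longer execute /etc/rc.local by default, so a oneshot unit is a more reliable way to persist the setting. This is a sketch; the unit name is illustrative and the interface name must match your system:

```shell
# Create a oneshot systemd unit that applies the ring buffer sizes at boot
sudo tee /etc/systemd/system/nic-ring-buffers.service <<'EOF' >/dev/null
[Unit]
Description=Set NIC ring buffer sizes
After=network-pre.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -G eth0 rx 4096 tx 4096

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now nic-ring-buffers.service
```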

Monitoring network performance metrics

Monitor TCP congestion control effectiveness

Check TCP congestion control statistics and connection metrics.

ss -i | grep -E '(cubic|bbr|reno)'
cat /proc/net/netstat | grep TcpExt
nstat -a | grep -E '(Tcp|Ip)' | head -20
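A single retransmission percentage is often easier to track than raw counters. Assuming the standard /proc/net/snmp field layout (OutSegs in column 12 and RetransSegs in column 13 of the Tcp data row), the rate can be derived with awk:

```shell
# Retransmission rate = RetransSegs / OutSegs, from the Tcp counters row
awk '/^Tcp: [0-9]/ { printf "retrans rate: %.3f%%\n", ($12 > 0) ? $13 / $12 * 100 : 0 }' /proc/net/snmp
```

Sustained rates above roughly 1% usually indicate congestion or packet loss worth investigating.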

Create network monitoring script

Save the following as /usr/local/bin/network-monitor.sh to automate monitoring of key network performance indicators.

#!/bin/bash
# Network performance monitoring script

echo "=== Network Performance Report $(date) ==="
echo "TCP Congestion Control:"
cat /proc/sys/net/ipv4/tcp_congestion_control
echo "Socket Statistics:"
ss -s
echo "TCP Connection States:"
ss -tan state established | wc -l
ss -tan state time-wait | wc -l
echo "Network Interface Statistics:"
ethtool -S eth0 | grep -E '(rx_bytes|tx_bytes|rx_dropped|tx_dropped)'
echo "TCP Retransmissions:"
nstat TcpRetransSegs | tail -1
echo "Buffer Usage:"
cat /proc/net/sockstat

Make monitoring script executable

Set proper permissions and test the monitoring script.

sudo chmod 755 /usr/local/bin/network-monitor.sh
sudo /usr/local/bin/network-monitor.sh

Verify your setup

Test the network optimizations and verify improved performance.

# Verify TCP BBR is active
cat /proc/sys/net/ipv4/tcp_congestion_control

# Check buffer sizes
sysctl net.core.rmem_max net.core.wmem_max

# Test network performance (run the iperf3 server first)
iperf3 -c 203.0.113.10 -p 5201 -t 30 -P 4

# Monitor active connections
ss -tan state established | head -10

# Check for network errors (non-zero counters only)
ethtool -S eth0 | grep -E '(error|drop)' | grep -v ': 0'

Note: Performance improvements are most noticeable on high-latency or high-bandwidth networks. Local network tests may not show dramatic differences.
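To confirm which of the values from this guide actually took effect, expected settings can be diffed against the running kernel. This is a sketch; extend the list with any other keys you configured:

```shell
# Compare expected sysctl values against the live kernel settings
expected="net.ipv4.tcp_congestion_control=bbr
net.core.default_qdisc=fq
net.core.rmem_max=16777216"

echo "$expected" | while IFS='=' read -r key want; do
    got=$(sysctl -n "$key" 2>/dev/null)
    if [ "$got" = "$want" ]; then
        echo "OK   $key = $got"
    else
        echo "DIFF $key: got '${got:-unset}', want '$want'"
    fi
done
```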

Performance testing and validation

Run comprehensive performance tests

Compare before and after performance using multiple test scenarios.

# Single connection test
iperf3 -c 203.0.113.10 -t 30

# Multiple parallel connections
iperf3 -c 203.0.113.10 -t 30 -P 8

# UDP bandwidth test
iperf3 -c 203.0.113.10 -u -b 1000M -t 30

# Reverse test (server sends to client)
iperf3 -c 203.0.113.10 -R -t 30

For advanced performance monitoring integration, see our guide on configuring Linux performance monitoring with collectd and InfluxDB. For complementary I/O optimizations, check our tutorial on optimizing Linux I/O performance.

Common issues

Symptom | Cause | Fix
BBR not available | Kernel version too old | Upgrade to kernel 4.9+ or use CUBIC with optimized parameters
No performance improvement | Network not bandwidth-limited | Test on high-latency or congested networks
Connection timeouts | Aggressive timeout settings | Increase tcp_fin_timeout and keepalive values
High memory usage | Large buffer sizes | Reduce rmem_max and wmem_max values by 50%
Packet drops | Small ring buffers | Increase NIC ring buffer sizes with ethtool -G
sysctl changes not persistent | Configuration not in sysctl.d | Ensure config is in /etc/sysctl.d/ and run sysctl -p

Next steps


#sysctl #tcp-bbr #network-performance #bandwidth-optimization #tcp-congestion-control
