Configure Linux memory cgroups v2 with systemd for advanced process isolation and resource control

Advanced · 25 min · Apr 17, 2026
Ubuntu 24.04 Ubuntu 22.04 Debian 12 AlmaLinux 9 Rocky Linux 9 Fedora 41

Set up cgroups v2 unified hierarchy with systemd to implement memory limits, isolation policies, and automated pressure responses for container workloads and system processes.

Prerequisites

  • Root or sudo access
  • systemd-based Linux distribution
  • Basic understanding of Linux process management

What this solves

Memory cgroups v2 with systemd provides fine-grained control over memory allocation, enabling process isolation, preventing memory exhaustion attacks, and implementing resource quotas. This unified hierarchy approach replaces the fragmented cgroups v1 system with a cleaner interface for container orchestration and multi-tenant environments.

Step-by-step configuration

Enable cgroups v2 unified hierarchy

Recent releases of the distributions listed above boot with cgroups v2 by default, but older installs and upgraded systems may still run the legacy v1 hierarchy. Force the unified v2 hierarchy by adding parameters to the kernel command line.

# Only modify GRUB if the parameter is not already on the kernel command line
if grep -q "systemd.unified_cgroup_hierarchy=1" /proc/cmdline; then
    echo "cgroups v2 already enabled"
else
    sudo sed -i 's/GRUB_CMDLINE_LINUX="\([^"]*\)"/GRUB_CMDLINE_LINUX="\1 systemd.unified_cgroup_hierarchy=1 cgroup_no_v1=all"/' /etc/default/grub
fi

Update bootloader configuration

Apply the kernel parameter changes with the bootloader tool for your distribution, then reboot to activate cgroups v2.

# Debian/Ubuntu
sudo update-grub
# RHEL family (AlmaLinux, Rocky Linux, Fedora)
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# Then reboot
sudo reboot

Verify cgroups v2 activation

Confirm that the system is using the unified cgroups v2 hierarchy after reboot.

mount | grep cgroup2
stat -fc %T /sys/fs/cgroup/
ls -la /sys/fs/cgroup/
Expected output: cgroup2 mounted at /sys/fs/cgroup, a filesystem type of cgroup2fs, and interface files such as cgroup.controllers in the cgroup root. Per-unit files like memory.current and memory.max appear inside slice directories such as system.slice/.
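If you want a scriptable check rather than eyeballing mount output, the filesystem type of /sys/fs/cgroup distinguishes the hierarchies. A minimal sketch, assuming a Linux host; it falls back gracefully if the mount is absent:

```shell
# "cgroup2fs" indicates the unified v2 hierarchy; "tmpfs" usually means hybrid/v1
fstype=$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null || echo "unavailable")
if [ "$fstype" = "cgroup2fs" ]; then
    echo "cgroups v2 unified hierarchy is active"
else
    echo "cgroups v2 not detected (filesystem type: $fstype)"
fi
```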

Install memory pressure monitoring tools

Install utilities for monitoring memory usage and pressure events within cgroups.

# Debian/Ubuntu
sudo apt update
sudo apt install -y procps htop stress-ng
# RHEL family (systemd-cgtop ships with systemd itself, no extra package needed)
sudo dnf install -y procps-ng htop stress-ng

Create memory-limited systemd service

Configure a test service with memory limits to demonstrate cgroups v2 memory control. Save the unit as /etc/systemd/system/memory-test.service:

[Unit]
Description=Memory Test Service
After=multi-user.target

[Service]
Type=simple
ExecStart=/usr/bin/stress-ng --vm 1 --vm-bytes 50M --timeout 300s
Restart=always
RestartSec=10
MemoryAccounting=yes
MemoryMax=100M
MemoryHigh=80M
MemorySwapMax=0
OOMPolicy=kill

[Install]
WantedBy=multi-user.target
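Once the service is running, the kernel counts limit events in its memory.events file. The snippet below parses sample file content (the values are invented for illustration) exactly the way you would parse the real file at /sys/fs/cgroup/system.slice/memory-test.service/memory.events:

```shell
# Sample memory.events content (illustrative values, not from a real host)
events='low 0
high 12
max 3
oom 0
oom_kill 0'
# "high" counts MemoryHigh throttling events; "oom_kill" counts hard-limit kills
high_hits=$(echo "$events" | awk '$1 == "high" {print $2}')
oom_kills=$(echo "$events" | awk '$1 == "oom_kill" {print $2}')
echo "high=$high_hits oom_kill=$oom_kills"
```

A rising "high" counter with zero "oom_kill" means MemoryHigh is throttling the workload before the hard MemoryMax limit is ever hit, which is usually the desired state.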

Configure advanced memory limits

Create systemd override files for existing services to apply memory controls without modifying original service files.

sudo mkdir -p /etc/systemd/system/nginx.service.d
sudo tee /etc/systemd/system/nginx.service.d/memory-limits.conf << 'EOF'
[Service]
MemoryAccounting=yes
MemoryMax=512M
MemoryHigh=400M
MemorySwapMax=0
OOMPolicy=continue
EOF
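After sudo systemctl daemon-reload and sudo systemctl restart nginx, systemctl show reports the effective limit in bytes. A quick conversion sketch; the sample output line below is assumed, not captured from a real host:

```shell
# Assumed sample output of: systemctl show nginx -p MemoryMax
show_out='MemoryMax=536870912'
bytes=${show_out#MemoryMax=}
# Convert bytes back to MiB to confirm the drop-in took effect
echo "MemoryMax is $(( bytes / 1024 / 1024 ))M"
```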

Set up user session memory limits

Configure memory limits for user sessions to prevent individual users from consuming excessive system memory.

sudo mkdir -p /etc/systemd/system/user@.service.d
sudo tee /etc/systemd/system/user@.service.d/memory-limits.conf << 'EOF'
[Service]
MemoryAccounting=yes
MemoryMax=2G
MemoryHigh=1.5G
Delegate=yes
EOF
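When sizing per-user caps it helps to sanity-check against total RAM, since every session can legitimately climb to its full MemoryMax. A rough sketch, assuming a 16 GiB host (the MemTotal value is illustrative):

```shell
# Illustrative value; on a live host read it with: awk '/MemTotal/ {print $2}' /proc/meminfo
mem_total_kib=16384000
# MemoryMax=2G per user session, in bytes
per_user_bytes=$(( 2 * 1024 * 1024 * 1024 ))
max_users=$(( mem_total_kib * 1024 / per_user_bytes ))
echo "hard caps allow roughly $max_users concurrent full-quota sessions"
```

Remember the kernel and system.slice need headroom too, so plan for fewer sessions than this upper bound.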

Create memory pressure notification script

Implement automated responses to memory pressure events using systemd and cgroups v2 pressure stall information.

sudo tee /usr/local/bin/memory-pressure-handler.sh << 'EOF'
#!/bin/bash

CGROUP_PATH="/sys/fs/cgroup/system.slice"
PRESSURE_THRESHOLD=10.0
LOG_FILE="/var/log/memory-pressure.log"

log_event() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE"
}

check_memory_pressure() {
    if [[ -f "$CGROUP_PATH/memory.pressure" ]]; then
        local some_avg10=$(awk '/^some/ {print $2}' "$CGROUP_PATH/memory.pressure" | cut -d'=' -f2)
        # Compare in awk rather than bc, which is not installed by default everywhere
        if awk -v v="$some_avg10" -v t="$PRESSURE_THRESHOLD" 'BEGIN {exit !(v > t)}'; then
            log_event "High memory pressure detected: avg10=$some_avg10"
            # Trigger cleanup actions
            systemctl reload nginx 2>/dev/null || true
            echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || true
            return 1
        fi
    fi
    return 0
}

check_memory_pressure
EOF

sudo chmod +x /usr/local/bin/memory-pressure-handler.sh
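The handler's parsing logic can be checked offline against a sample memory.pressure payload (the numbers here are invented for illustration; real files live under /sys/fs/cgroup):

```shell
# Sample PSI content in the format the kernel writes to memory.pressure
pressure='some avg10=12.34 avg60=5.67 avg300=1.89 total=123456
full avg10=0.00 avg60=0.00 avg300=0.00 total=0'
# Extract the 10-second "some" average, the same field the handler reads
some_avg10=$(echo "$pressure" | awk '/^some/ {print $2}' | cut -d'=' -f2)
# awk comparison avoids a dependency on bc
if awk -v v="$some_avg10" -v t=10.0 'BEGIN {exit !(v > t)}'; then
    status="above-threshold"
else
    status="ok"
fi
echo "$some_avg10 -> $status"
```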

Configure memory pressure monitoring service

Create a systemd timer to regularly check memory pressure and trigger automated responses.

Save the service unit as /etc/systemd/system/memory-pressure-monitor.service:

[Unit]
Description=Memory Pressure Monitor
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/memory-pressure-handler.sh
User=root
StandardOutput=journal
StandardError=journal

Save the timer unit as /etc/systemd/system/memory-pressure-monitor.timer:

[Unit]
Description=Memory Pressure Monitor Timer
Requires=memory-pressure-monitor.service

[Timer]
OnBootSec=60
OnUnitActiveSec=30
Persistent=true

[Install]
WantedBy=timers.target

Enable memory controllers for container workloads

Turn on systemd's default resource accounting so container runtimes such as Docker and Podman get accurate per-unit cgroup data. Rootless Podman additionally relies on Delegate=yes for user@.service, as configured in the user session step above.

sudo mkdir -p /etc/systemd/system.conf.d
sudo tee /etc/systemd/system.conf.d/delegate.conf << 'EOF'
[Manager]
DefaultMemoryAccounting=yes
DefaultCPUAccounting=yes
DefaultIOAccounting=yes
EOF
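After a reboot (or systemctl daemon-reexec), delegation can be verified by reading cgroup.controllers inside the delegated subtree. The parsing below runs against a sample controller list, assumed typical of a v2 host; on a live system substitute the real file path shown in the comment:

```shell
# e.g. from: cat /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/cgroup.controllers
controllers='cpuset cpu io memory pids'
# The memory controller must appear here for rootless containers to enforce limits
case " $controllers " in
    *" memory "*) delegated="yes" ;;
    *)            delegated="no"  ;;
esac
echo "memory controller delegated: $delegated"
```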

Configure memory swap controls

Set up swap limitations and memory-only constraints for critical services.

sudo mkdir -p /etc/systemd/system/critical-app.service.d
sudo tee /etc/systemd/system/critical-app.service.d/memory-controls.conf << 'EOF'
[Service]
MemoryAccounting=yes
MemoryMax=1G
MemoryHigh=800M
MemorySwapMax=0
MemoryZSwapMax=0
OOMPolicy=kill
EOF

Reload systemd and enable services

Apply all configuration changes and start the monitoring services.

sudo systemctl daemon-reload
sudo systemctl enable --now memory-pressure-monitor.timer
sudo systemctl enable --now memory-test.service
sudo systemctl status memory-pressure-monitor.timer

Monitor memory cgroup usage

Use these commands to inspect memory usage and limits across your cgroups v2 hierarchy.

# View system-wide cgroup memory usage
sudo systemd-cgtop --iterations=1

Check specific service memory consumption

sudo systemctl status memory-test.service
sudo cat /sys/fs/cgroup/system.slice/memory-test.service/memory.current
sudo cat /sys/fs/cgroup/system.slice/memory-test.service/memory.max

Monitor memory pressure events

sudo cat /sys/fs/cgroup/system.slice/memory.pressure
sudo journalctl -u memory-pressure-monitor.service -f

Advanced memory isolation policies

Implement sophisticated memory management for multi-tenant environments and container orchestration.

Create hierarchical memory limits

Set up nested cgroup limits for complex application stacks. Save the slice unit as /etc/systemd/system/web-stack.slice:

[Unit]
Description=Web Application Stack
After=slices.target

[Slice]
MemoryAccounting=yes
MemoryMax=4G
MemoryHigh=3G
CPUAccounting=yes
CPUQuota=200%

sudo systemctl daemon-reload
sudo systemctl start web-stack.slice
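Services join the slice via a Slice= directive in a drop-in. A minimal sketch, assuming you want nginx to live inside the web-stack budget; the drop-in path is an example:

```ini
# /etc/systemd/system/nginx.service.d/slice.conf
[Service]
Slice=web-stack.slice
```

With this in place, nginx's own MemoryMax is still honored, but the slice's 4G ceiling additionally caps the stack as a whole, so every service sharing the slice competes within one budget.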

Configure OOM killer policies

Customize out-of-memory handling for different service classes.

sudo mkdir -p /etc/systemd/system/database.service.d
sudo tee /etc/systemd/system/database.service.d/oom-policy.conf << 'EOF'
[Service]
OOMPolicy=continue
OOMScoreAdjust=-500
EOF

Set up memory reclaim policies

Configure proactive memory reclaim when approaching limits.

sudo tee /usr/local/bin/memory-reclaim.sh << 'EOF'
#!/bin/bash

CGROUP="$1"
THRESHOLD="$2"

if [[ -z "$CGROUP" || -z "$THRESHOLD" ]]; then
    echo "Usage: $0 <cgroup-path> <threshold-percent>"
    exit 1
fi

CURRENT=$(cat "$CGROUP/memory.current")
MAX=$(cat "$CGROUP/memory.max")

if [[ "$MAX" != "max" ]]; then
    USAGE_PERCENT=$(( CURRENT * 100 / MAX ))
    
    if [[ $USAGE_PERCENT -gt $THRESHOLD ]]; then
        # memory.reclaim takes an amount of memory to reclaim (e.g. 100M), not a flag
        echo "100M" > "$CGROUP/memory.reclaim" 2>/dev/null || true
        logger "Memory reclaim triggered for $CGROUP at $USAGE_PERCENT% usage"
    fi
fi
EOF

sudo chmod +x /usr/local/bin/memory-reclaim.sh
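To see how the script decides, here is its arithmetic applied to sample numbers: 900 MiB in use against a 1 GiB cap. On a live host you would invoke the real script, for example as /usr/local/bin/memory-reclaim.sh /sys/fs/cgroup/system.slice 80:

```shell
CURRENT=$(( 900 * 1024 * 1024 ))    # sample memory.current value in bytes
MAX=$(( 1024 * 1024 * 1024 ))       # sample memory.max value in bytes
# Integer percentage, same expression the script uses
USAGE_PERCENT=$(( CURRENT * 100 / MAX ))
echo "$USAGE_PERCENT"   # 87 -> above an 80 threshold, so reclaim would trigger
```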

Troubleshoot memory cgroup issues

Debug common memory limit and pressure problems with these diagnostic commands.

# Check for OOM kills in journals
sudo journalctl --since="1 hour ago" | grep -i "killed process\|out of memory\|oom"

Verify cgroup v2 features are available

cat /sys/fs/cgroup/cgroup.controllers
cat /sys/fs/cgroup/cgroup.subtree_control

Monitor memory events

sudo cat /sys/fs/cgroup/system.slice/memory.events

Test memory pressure with stress tool

sudo systemd-run --uid=1000 --gid=1000 --property=MemoryMax=50M stress-ng --vm 1 --vm-bytes 100M --timeout 10s

Verify your setup

# Confirm cgroups v2 is active
mount | grep cgroup2
grep -o "systemd.unified_cgroup_hierarchy=1" /proc/cmdline

Check memory accounting is enabled

sudo systemctl show memory-test.service | grep MemoryAccounting
sudo systemctl show memory-test.service | grep MemoryMax

Verify pressure monitoring is running

sudo systemctl status memory-pressure-monitor.timer
sudo systemctl list-timers | grep memory-pressure

Test memory limits are enforced

sudo systemctl status memory-test.service
sudo systemd-cgtop --iterations=1

Common issues

Symptom | Cause | Fix
cgroup2 not mounted | Still using cgroups v1 | Add kernel parameters and reboot
Memory limits not enforced | MemoryAccounting disabled | Set MemoryAccounting=yes in the service
Services killed by OOM | Memory limit too restrictive | Increase MemoryMax or optimize the application
Pressure events not triggering | Monitoring script permissions | Ensure the script is executable and runs as root
Container memory limits ignored | No delegation configured | Enable delegation in the systemd configuration
Swap still used despite MemorySwapMax=0 | System swap not disabled | Check /proc/swaps and consider swapoff
