Configure Prometheus Alertmanager with email notifications for production monitoring

Difficulty: Intermediate · Time: 35 min · Updated: Apr 17, 2026
Applies to: Ubuntu 24.04, Debian 12, AlmaLinux 9, Rocky Linux 9

Set up Prometheus Alertmanager to send email notifications when your systems trigger alerts. This tutorial covers SMTP configuration, alert routing rules, and email template customization for production monitoring workflows.

Prerequisites

  • Prometheus server installed and running
  • SMTP server credentials (Gmail, SendGrid, or corporate mail server)
  • Root or sudo access
  • Basic understanding of YAML configuration

What this solves

Prometheus Alertmanager receives alerts from Prometheus and routes them to notification channels like email, Slack, or PagerDuty. Email notifications let your operations team respond to system issues without constantly watching dashboards. This setup is essential for production environments where you need reliable alerting for service outages, resource exhaustion, or performance degradation.

Step-by-step installation

Update system packages

Start by updating your package manager to ensure you have the latest security patches and repository information.

# Ubuntu/Debian
sudo apt update && sudo apt upgrade -y

# AlmaLinux/Rocky Linux
sudo dnf update -y

Create alertmanager user and directories

Create a dedicated system user for Alertmanager and the required directory structure for configuration and data storage.

sudo useradd --no-create-home --shell /bin/false alertmanager
sudo mkdir -p /etc/alertmanager /var/lib/alertmanager
sudo chown -R alertmanager:alertmanager /etc/alertmanager /var/lib/alertmanager

Download and install Alertmanager

Download the latest Alertmanager binary from the official releases and install it to the system PATH.

cd /tmp
wget https://github.com/prometheus/alertmanager/releases/download/v0.27.0/alertmanager-0.27.0.linux-amd64.tar.gz
tar xzf alertmanager-0.27.0.linux-amd64.tar.gz
sudo cp alertmanager-0.27.0.linux-amd64/alertmanager /usr/local/bin/
sudo cp alertmanager-0.27.0.linux-amd64/amtool /usr/local/bin/
sudo chown alertmanager:alertmanager /usr/local/bin/alertmanager /usr/local/bin/amtool
sudo chmod +x /usr/local/bin/alertmanager /usr/local/bin/amtool

Configure Alertmanager with email settings

Create the main Alertmanager configuration file with SMTP settings for email notifications. Replace the SMTP details with your mail server configuration.

global:
  smtp_smarthost: 'smtp.gmail.com:587'
  smtp_from: 'alerts@example.com'
  smtp_auth_username: 'alerts@example.com'
  smtp_auth_password: 'your-app-password'
  smtp_require_tls: true

route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  receiver: 'web.hook'

receivers:
  - name: 'web.hook'
    email_configs:
      - to: 'admin@example.com'
        subject: 'Alert: {{ .GroupLabels.alertname }}'
        body: |
          {{ range .Alerts }}
          Alert: {{ .Annotations.summary }}
          Description: {{ .Annotations.description }}
          Labels: {{ .Labels }}
          Started: {{ .StartsAt }}
          {{ end }}

inhibit_rules:
  - source_match:
      severity: 'critical'
    target_match:
      severity: 'warning'
    equal: ['alertname', 'dev', 'instance']
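If you later want to silence non-critical emails outside business hours, Alertmanager (v0.22+) supports named time intervals that can mute a route. A hedged sketch, assuming you add it to alertmanager.yml; the interval name offhours, the receiver name warning-email, and the times shown are placeholders:

```yaml
# Hypothetical addition: mute warning-severity emails overnight.
# "offhours" and "warning-email" are illustrative names, not defaults.
time_intervals:
  - name: offhours
    time_intervals:
      - times:
          - start_time: "22:00"
            end_time: "07:00"

route:
  routes:
    - match:
        severity: warning
      receiver: 'warning-email'
      mute_time_intervals:
        - offhours
```

Muted alerts are still recorded and visible in the web UI; only the notifications are suppressed during the interval.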

Set proper file permissions

Ensure the alertmanager user can read the configuration file while keeping credentials secure from other users.

sudo chown -R alertmanager:alertmanager /etc/alertmanager
sudo chmod 640 /etc/alertmanager/alertmanager.yml
Never use chmod 777. The configuration file contains sensitive SMTP credentials; mode 640 lets only the owning user (and root) read and write it, the group read it, and everyone else nothing.
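As a quick illustration of exactly what mode 640 grants, here is a throwaway sketch using a temporary file (not the real config):

```shell
# Demonstrate 640: owner read/write, group read-only, others no access.
f=$(mktemp)
chmod 640 "$f"
stat -c '%a' "$f"   # prints: 640
rm -f "$f"
```

The same check (`stat -c '%a' /etc/alertmanager/alertmanager.yml`) is a fast way to confirm the permissions you set above actually took effect.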

Create systemd service file

Set up a systemd service to manage Alertmanager as a system daemon with proper restart behavior and security hardening. Create /etc/systemd/system/alertmanager.service with the following content.

[Unit]
Description=Alertmanager
Wants=network-online.target
After=network-online.target

[Service]
User=alertmanager
Group=alertmanager
Type=simple
ExecStart=/usr/local/bin/alertmanager \
  --config.file=/etc/alertmanager/alertmanager.yml \
  --storage.path=/var/lib/alertmanager/ \
  --web.listen-address=0.0.0.0:9093 \
  --cluster.listen-address=0.0.0.0:9094
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=5

# Security settings
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/lib/alertmanager

[Install]
WantedBy=multi-user.target

Note that the --web.console.* flags sometimes copied from Prometheus unit files do not exist in Alertmanager and are omitted here. The ExecReload line lets systemctl reload send SIGHUP, which tells Alertmanager to re-read its configuration.

Enable and start Alertmanager

Reload systemd configuration, enable Alertmanager to start on boot, and start the service.

sudo systemctl daemon-reload
sudo systemctl enable alertmanager
sudo systemctl start alertmanager
sudo systemctl status alertmanager

Configure firewall for Alertmanager

Open the required ports for Alertmanager web interface and cluster communication.

# Ubuntu/Debian (ufw)
sudo ufw allow 9093/tcp comment 'Alertmanager web interface'
sudo ufw allow 9094/tcp comment 'Alertmanager cluster'

# AlmaLinux/Rocky Linux (firewalld)
sudo firewall-cmd --permanent --add-port=9093/tcp
sudo firewall-cmd --permanent --add-port=9094/tcp
sudo firewall-cmd --reload

Configure Prometheus to send alerts

Update your Prometheus configuration to send alerts to Alertmanager. Add the alerting section to your existing Prometheus config.

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          - localhost:9093

# Load rules once and periodically evaluate them
rule_files:
  - "/etc/prometheus/alerts.yml"

# Existing scrape_configs section remains unchanged
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
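If you later run a second Alertmanager replica for high availability, Prometheus should list every replica, because Prometheus sends each alert to all configured Alertmanagers and deduplication happens inside the Alertmanager cluster. A sketch; the second hostname is a placeholder:

```yaml
# Sketch: point Prometheus at two Alertmanager replicas.
# alertmanager-2.example.internal is a hypothetical second host.
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          - localhost:9093
          - alertmanager-2.example.internal:9093
```

The replicas must also be clustered with each other (via the --cluster.peer flag) so they gossip about notifications and avoid duplicate emails.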

Create sample alert rules

Define alert rules that trigger when system resources exceed thresholds. These examples monitor high CPU usage, memory consumption, and disk space.

groups:
  - name: system_alerts
    rules:
      - alert: HighCPUUsage
        expr: 100 - (avg by(instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage detected on {{ $labels.instance }}"
          description: "CPU usage is above 80% for more than 5 minutes on {{ $labels.instance }}"

      - alert: HighMemoryUsage
        expr: (1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)) * 100 > 90
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "High memory usage on {{ $labels.instance }}"
          description: "Memory usage is above 90% on {{ $labels.instance }}"

      - alert: DiskSpaceLow
        expr: (1 - (node_filesystem_avail_bytes{fstype!="tmpfs"} / node_filesystem_size_bytes{fstype!="tmpfs"})) * 100 > 85
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Disk space low on {{ $labels.instance }}"
          description: "Disk space usage is above 85% on {{ $labels.instance }} mount {{ $labels.mountpoint }}"

      - alert: ServiceDown
        expr: up == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Service {{ $labels.job }} is down"
          description: "Service {{ $labels.job }} on {{ $labels.instance }} has been down for more than 1 minute"
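One gap in threshold rules like these: if node_exporter stops reporting entirely, expressions over its metrics return no samples and never fire. A hedged sketch of a catch-all rule using PromQL's absent() function; the job name "node" is an assumption about your scrape configuration, so adjust it to match prometheus.yml:

```yaml
# Hypothetical extra rule: fire when node_exporter metrics disappear.
# Assumes a scrape job named "node"; change the label to match your setup.
      - alert: NodeExporterAbsent
        expr: absent(node_cpu_seconds_total{job="node"})
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "node_exporter metrics missing"
          description: "No node_cpu_seconds_total samples received for job 'node' for 5 minutes"
```

This complements the ServiceDown rule: up == 0 catches a target that fails scrapes, while absent() also catches a target that was removed from the scrape configuration entirely.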

Set alert rules file permissions

Ensure Prometheus can read the alert rules file with proper ownership and permissions.

sudo chown prometheus:prometheus /etc/prometheus/alerts.yml
sudo chmod 644 /etc/prometheus/alerts.yml

Restart Prometheus to load new configuration

Restart Prometheus to apply the alerting configuration and load the new alert rules.

sudo systemctl restart prometheus
sudo systemctl status prometheus

Configure advanced email routing

Create routing rules for different alert severities

Configure different email recipients based on alert severity levels. Critical alerts go to on-call engineers while warnings go to the general team.

global:
  smtp_smarthost: 'smtp.gmail.com:587'
  smtp_from: 'alerts@example.com'
  smtp_auth_username: 'alerts@example.com'
  smtp_auth_password: 'your-app-password'
  smtp_require_tls: true

route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: 'default'
  routes:
    - match:
        severity: critical
      receiver: 'critical-alerts'
      group_wait: 10s
      repeat_interval: 5m
    - match:
        severity: warning
      receiver: 'warning-alerts'
      repeat_interval: 24h

receivers:
  - name: 'default'
    email_configs:
      - to: 'team@example.com'
        subject: 'Alert: {{ .GroupLabels.alertname }}'

  - name: 'critical-alerts'
    email_configs:
      - to: 'oncall@example.com,admin@example.com'
        subject: '[CRITICAL] {{ .GroupLabels.alertname }}'
        body: |
          CRITICAL ALERT TRIGGERED
          
          {{ range .Alerts }}
          Alert: {{ .Annotations.summary }}
          Description: {{ .Annotations.description }}
          Severity: {{ .Labels.severity }}
          Instance: {{ .Labels.instance }}
          Started: {{ .StartsAt }}
          
          {{ end }}
          
          Please investigate immediately.

  - name: 'warning-alerts'
    email_configs:
      - to: 'team@example.com'
        subject: '[WARNING] {{ .GroupLabels.alertname }}'
        body: |
          Warning Alert
          
          {{ range .Alerts }}
          Alert: {{ .Annotations.summary }}
          Description: {{ .Annotations.description }}
          Instance: {{ .Labels.instance }}
          Started: {{ .StartsAt }}
          {{ end }}

inhibit_rules:
  - source_match:
      severity: 'critical'
    target_match:
      severity: 'warning'
    equal: ['alertname', 'instance']
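Email bodies can also come from shared template files instead of inline strings, which keeps alertmanager.yml readable when templates grow. A sketch, assuming you create a file such as /etc/alertmanager/templates/email.tmpl that defines a template named email.custom; both the path and the template name are placeholders:

```yaml
# Hypothetical: load template files and reference a named template.
templates:
  - '/etc/alertmanager/templates/*.tmpl'

receivers:
  - name: 'critical-alerts'
    email_configs:
      - to: 'oncall@example.com'
        html: '{{ template "email.custom" . }}'
```

The html field sends an HTML body; using both html and body sends a multipart message with plain-text and HTML alternatives.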

Reload Alertmanager configuration

Apply the new routing configuration without restarting the service. Alertmanager re-reads its configuration on SIGHUP or via its HTTP reload endpoint; note that sudo systemctl reload alertmanager only works if the unit file defines an ExecReload command.

curl -X POST http://localhost:9093/-/reload
sudo kill -HUP $(pgrep -x alertmanager)

Test email notifications

Send test alert using amtool

Use the amtool command-line utility to send a test alert and verify email delivery works correctly.

amtool alert add alertname=TestAlert severity=warning instance=localhost \
  --annotation=summary="This is a test alert" \
  --annotation=description="Testing email notifications from Alertmanager" \
  --alertmanager.url=http://localhost:9093

Check alert status via the API

Query the Alertmanager API to view active alerts and confirm your test alert appears. Use the v2 endpoint; the v1 API was removed in Alertmanager 0.27.

curl -s http://localhost:9093/api/v2/alerts | python3 -m json.tool

You can also visit http://your-server-ip:9093 in a web browser to see the Alertmanager dashboard.

Verify your setup

sudo systemctl status alertmanager
curl -s http://localhost:9093/-/healthy
amtool config show --alertmanager.url=http://localhost:9093
promtool check config /etc/prometheus/prometheus.yml

Check that Prometheus can reach Alertmanager by querying its API: curl -s http://localhost:9090/api/v1/alertmanagers should list your Alertmanager endpoint under activeAlertmanagers.

Common issues

| Symptom | Cause | Fix |
|---|---|---|
| Emails not sending | SMTP authentication failed | Check SMTP credentials and use app-specific passwords for Gmail |
| Alertmanager won't start | Configuration syntax error | Run amtool check-config /etc/alertmanager/alertmanager.yml to validate the file |
| Alerts not reaching Alertmanager | Prometheus can't connect | Check firewall rules and verify Alertmanager is listening on 9093 |
| Permission denied errors | Incorrect file ownership | Run sudo chown -R alertmanager:alertmanager /etc/alertmanager |
| Too many duplicate emails | Group interval too short | Increase group_interval and repeat_interval in the routing config |
| Missing alert rules | Prometheus config not updated | Add a rule_files section to prometheus.yml and restart the service |
