Configure Prometheus Alertmanager for email and Slack notifications with webhook integration

Intermediate · 25 min · Apr 18, 2026
Ubuntu 24.04 · Debian 12 · AlmaLinux 9 · Rocky Linux 9

Set up Prometheus Alertmanager to send critical alerts via email and Slack channels with custom webhook integration. This tutorial covers installation, SMTP configuration, routing rules, and alert notification testing.

Prerequisites

  • Prometheus server installed and running
  • SMTP server access for email notifications
  • Slack workspace with webhook permissions
  • Basic familiarity with YAML configuration

What this solves

Prometheus Alertmanager centralizes alert routing and notification management for your monitoring infrastructure. Without Alertmanager, Prometheus can evaluate alerting rules and track alert states, but it cannot notify your team when critical issues occur. This tutorial configures email notifications via SMTP, Slack integration with webhooks, and custom routing rules to ensure the right alerts reach the right people at the right time.

Step-by-step installation

Update system packages

Start by updating your package manager to ensure you get the latest versions of dependencies.

On Ubuntu/Debian:

sudo apt update && sudo apt upgrade -y
sudo apt install -y wget curl

On AlmaLinux/Rocky Linux:

sudo dnf update -y
sudo dnf install -y wget curl

Create Alertmanager user

Create a dedicated system user to run Alertmanager securely without shell access.

sudo useradd --no-create-home --shell /bin/false alertmanager

Download and install Alertmanager

Download the latest Alertmanager binary from the official Prometheus releases.

cd /tmp
wget https://github.com/prometheus/alertmanager/releases/download/v0.27.0/alertmanager-0.27.0.linux-amd64.tar.gz
tar xzf alertmanager-0.27.0.linux-amd64.tar.gz
sudo cp alertmanager-0.27.0.linux-amd64/alertmanager /usr/local/bin/
sudo cp alertmanager-0.27.0.linux-amd64/amtool /usr/local/bin/
sudo chown alertmanager:alertmanager /usr/local/bin/alertmanager /usr/local/bin/amtool
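To confirm the binaries installed correctly, print their versions (this assumes /usr/local/bin is on your PATH):

```shell
# Both binaries should report v0.27.0
alertmanager --version
amtool --version
```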

Create directories and set permissions

Set up the required directories with proper ownership and permissions for Alertmanager.

sudo mkdir -p /etc/alertmanager /var/lib/alertmanager
sudo chown alertmanager:alertmanager /etc/alertmanager /var/lib/alertmanager
sudo chmod 755 /etc/alertmanager /var/lib/alertmanager

Create Alertmanager configuration

Create /etc/alertmanager/alertmanager.yml with email SMTP settings, the Slack webhook, and routing rules. Replace the webhook URL and email settings with your actual values.

global:
  smtp_smarthost: 'smtp.gmail.com:587'
  smtp_from: 'alerts@example.com'
  smtp_auth_username: 'alerts@example.com'
  smtp_auth_password: 'your-app-password'
  smtp_require_tls: true

route:
  group_by: ['alertname', 'cluster', 'service']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'default-receiver'
  routes:
    - match:
        severity: critical
      receiver: 'critical-alerts'
    - match:
        severity: warning
      receiver: 'warning-alerts'
    - match:
        alertname: 'InstanceDown'
      receiver: 'instance-down-alerts'

receivers:
  - name: 'default-receiver'
    email_configs:
      - to: 'admin@example.com'
        subject: 'Prometheus Alert: {{ .GroupLabels.alertname }}'
        body: |
          {{ range .Alerts }}
          Alert: {{ .Annotations.summary }}
          Description: {{ .Annotations.description }}
          Labels: {{ .Labels }}
          {{ end }}

  - name: 'critical-alerts'
    email_configs:
      - to: 'oncall@example.com'
        subject: 'CRITICAL: {{ .GroupLabels.alertname }}'
        body: |
          CRITICAL ALERT TRIGGERED
          
          {{ range .Alerts }}
          Alert: {{ .Annotations.summary }}
          Description: {{ .Annotations.description }}
          Severity: {{ .Labels.severity }}
          Instance: {{ .Labels.instance }}
          Started: {{ .StartsAt }}
          {{ end }}
    slack_configs:
      - api_url: 'https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK'
        channel: '#alerts'
        title: 'Critical Alert: {{ .GroupLabels.alertname }}'
        text: |
          {{ range .Alerts }}
          Alert: {{ .Annotations.summary }}
          Description: {{ .Annotations.description }}
          Severity: {{ .Labels.severity }}
          Instance: {{ .Labels.instance }}
          {{ end }}
        send_resolved: true

  - name: 'warning-alerts'
    email_configs:
      - to: 'team@example.com'
        subject: 'Warning: {{ .GroupLabels.alertname }}'
        body: |
          {{ range .Alerts }}
          Alert: {{ .Annotations.summary }}
          Description: {{ .Annotations.description }}
          Labels: {{ .Labels }}
          {{ end }}
    slack_configs:
      - api_url: 'https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK'
        channel: '#monitoring'
        title: 'Warning: {{ .GroupLabels.alertname }}'
        text: '{{ range .Alerts }}{{ .Annotations.summary }}{{ end }}'
        send_resolved: true

  - name: 'instance-down-alerts'
    email_configs:
      - to: 'infrastructure@example.com'
        subject: 'Instance Down: {{ .GroupLabels.instance }}'
        body: |
          SERVER DOWN ALERT
          
          {{ range .Alerts }}
          Instance: {{ .Labels.instance }}
          Job: {{ .Labels.job }}
          Started: {{ .StartsAt }}
          {{ end }}
    slack_configs:
      - api_url: 'https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK'
        channel: '#infrastructure'
        title: 'Server Down: {{ .GroupLabels.instance }}'
        text: 'Instance {{ .GroupLabels.instance }} is unreachable'
        send_resolved: true

inhibit_rules:
  - source_match:
      severity: 'critical'
    target_match:
      severity: 'warning'
    equal: ['alertname', 'cluster', 'service']

Set configuration file permissions

Secure the configuration file since it contains sensitive SMTP credentials.

sudo chown alertmanager:alertmanager /etc/alertmanager/alertmanager.yml
sudo chmod 640 /etc/alertmanager/alertmanager.yml
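Before creating the service, you can validate the file and exercise the routing tree with amtool, which was installed alongside Alertmanager:

```shell
# Parse and validate the configuration file
amtool check-config /etc/alertmanager/alertmanager.yml

# Show which receiver a given label set would route to,
# e.g. severity=critical should route to 'critical-alerts'
amtool config routes test --config.file=/etc/alertmanager/alertmanager.yml severity=critical
amtool config routes test --config.file=/etc/alertmanager/alertmanager.yml alertname=InstanceDown
```

Catching syntax or routing mistakes here is much faster than debugging a failed service start later.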

Create systemd service file

Create /etc/systemd/system/alertmanager.service so systemd can manage Alertmanager and start it automatically at boot.

[Unit]
Description=Alertmanager
Wants=network-online.target
After=network-online.target

[Service]
User=alertmanager
Group=alertmanager
Type=simple
WorkingDirectory=/var/lib/alertmanager
ExecStart=/usr/local/bin/alertmanager \
  --config.file=/etc/alertmanager/alertmanager.yml \
  --storage.path=/var/lib/alertmanager \
  --web.listen-address=0.0.0.0:9093 \
  --cluster.listen-address=0.0.0.0:9094 \
  --log.level=info

ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target

Enable and start Alertmanager

Enable the service to start on boot and launch Alertmanager.

sudo systemctl daemon-reload
sudo systemctl enable --now alertmanager
sudo systemctl status alertmanager

Configure firewall

Open the necessary ports for the Alertmanager web UI (9093) and cluster communication (9094).

On Ubuntu/Debian with ufw:

sudo ufw allow 9093/tcp
sudo ufw allow 9094/tcp
sudo ufw reload

On AlmaLinux/Rocky Linux with firewalld:

sudo firewall-cmd --permanent --add-port=9093/tcp
sudo firewall-cmd --permanent --add-port=9094/tcp
sudo firewall-cmd --reload

Configure Prometheus to use Alertmanager

Update your Prometheus configuration (typically /etc/prometheus/prometheus.yml) to send alerts to Alertmanager. Add this to your existing Prometheus config.

alerting:
  alertmanagers:
    - static_configs:
        - targets:
          - localhost:9093

rule_files:
  - "/etc/prometheus/alert_rules.yml"
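Before restarting Prometheus, it is worth checking that the edited file still parses. promtool ships with Prometheus; the config path below is the common default, so adjust it to your install:

```shell
# Reports parse errors and validates referenced rule files
promtool check config /etc/prometheus/prometheus.yml
```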

Create sample alerting rules

Create /etc/prometheus/alert_rules.yml (the path referenced above under rule_files) with basic alerting rules for Prometheus to trigger notifications through Alertmanager.

groups:
  - name: basic-alerts
    rules:
    - alert: InstanceDown
      expr: up == 0
      for: 1m
      labels:
        severity: critical
      annotations:
        summary: "Instance {{ $labels.instance }} is down"
        description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 1 minute."

    - alert: HighCpuUsage
      expr: 100 - (avg by(instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
      for: 2m
      labels:
        severity: warning
      annotations:
        summary: "High CPU usage on {{ $labels.instance }}"
        description: "CPU usage is above 80% on {{ $labels.instance }} for more than 2 minutes."

    - alert: HighMemoryUsage
      expr: (1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)) * 100 > 90
      for: 2m
      labels:
        severity: critical
      annotations:
        summary: "High memory usage on {{ $labels.instance }}"
        description: "Memory usage is above 90% on {{ $labels.instance }} for more than 2 minutes."

    - alert: DiskSpaceLow
      expr: (1 - (node_filesystem_avail_bytes{fstype!="tmpfs"} / node_filesystem_size_bytes{fstype!="tmpfs"})) * 100 > 85
      for: 1m
      labels:
        severity: warning
      annotations:
        summary: "Low disk space on {{ $labels.instance }}"
        description: "Disk usage is above 85% on {{ $labels.instance }} {{ $labels.mountpoint }}."

    - alert: ServiceDown
      expr: up{job="node"} == 0
      for: 30s
      labels:
        severity: critical
      annotations:
        summary: "Service {{ $labels.job }} is down"
        description: "Service {{ $labels.job }} on {{ $labels.instance }} is down for more than 30 seconds."
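You can lint the rule file with promtool before Prometheus loads it:

```shell
# Reports syntax errors and the number of rules found in each group
promtool check rules /etc/prometheus/alert_rules.yml
```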

Restart Prometheus to load new configuration

Reload Prometheus configuration to activate the new alerting rules and Alertmanager integration.

sudo systemctl restart prometheus
sudo systemctl status prometheus

Configure Slack webhook integration

Create Slack webhook URL

Go to your Slack workspace settings and create an Incoming Webhook. Navigate to Apps > Incoming Webhooks > Add to Slack. Select the channel where you want alerts and copy the webhook URL.

Test Slack notification

Send a test alert to verify your Slack integration works correctly.

curl -X POST 'http://localhost:9093/api/v2/alerts' \
-H 'Content-Type: application/json' \
-d '[{
  "labels": {
    "alertname": "TestAlert",
    "severity": "critical",
    "instance": "test-server"
  },
  "annotations": {
    "summary": "This is a test alert",
    "description": "Testing Alertmanager Slack integration"
  },
  "generatorURL": "http://localhost:9090/graph"
}]'
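As an alternative to raw curl, amtool can fire the same kind of test alert against the running instance (the label values here are arbitrary test data):

```shell
# Post a test alert through amtool instead of curl
amtool alert add \
  --alertmanager.url=http://localhost:9093 \
  alertname=TestAlert severity=critical instance=test-server \
  --annotation=summary="This is a test alert"
```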

Configure email notifications

Set up Gmail App Password

If using Gmail, create an App Password for SMTP authentication. Go to Google Account settings > Security > 2-Step Verification > App passwords. Generate a password and use it in the Alertmanager config.

Test email notification

Send a test email alert to verify SMTP configuration works.

curl -X POST 'http://localhost:9093/api/v2/alerts' \
-H 'Content-Type: application/json' \
-d '[{
  "labels": {
    "alertname": "EmailTest",
    "severity": "warning",
    "instance": "mail-test-server"
  },
  "annotations": {
    "summary": "Email notification test",
    "description": "Testing Alertmanager email integration via SMTP"
  },
  "generatorURL": "http://localhost:9090/graph"
}]'
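After posting test alerts, you can list what Alertmanager currently holds and silence the email test so it stops renotifying while you verify SMTP delivery:

```shell
# List alerts currently known to Alertmanager
amtool alert query --alertmanager.url=http://localhost:9093

# Silence the email test alert for an hour
amtool silence add --alertmanager.url=http://localhost:9093 \
  --comment="SMTP delivery test" --duration=1h alertname=EmailTest
```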

Advanced webhook integration

Create custom webhook receiver

Add a custom webhook receiver to your Alertmanager configuration for integration with external systems. Note that webhook_configs sends a fixed JSON payload; unlike email or Slack receivers, it has no title or text template fields.

  - name: 'webhook-alerts'
    webhook_configs:
      - url: 'http://your-webhook-endpoint.com/alerts'
        http_config:
          basic_auth:
            username: 'webhook-user'
            password: 'webhook-password'
        send_resolved: true
        max_alerts: 10
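Your endpoint receives Alertmanager's version-4 webhook payload, a fixed JSON document. The values below are illustrative (hostnames, labels, and timestamps are placeholders):

```json
{
  "version": "4",
  "groupKey": "{}:{alertname=\"InstanceDown\"}",
  "status": "firing",
  "receiver": "webhook-alerts",
  "groupLabels": { "alertname": "InstanceDown" },
  "commonLabels": { "alertname": "InstanceDown", "severity": "critical" },
  "commonAnnotations": { "summary": "Instance 10.0.0.5:9100 is down" },
  "externalURL": "http://your-server:9093",
  "alerts": [
    {
      "status": "firing",
      "labels": { "alertname": "InstanceDown", "instance": "10.0.0.5:9100", "severity": "critical" },
      "annotations": { "summary": "Instance 10.0.0.5:9100 is down" },
      "startsAt": "2026-04-18T09:00:00Z",
      "endsAt": "0001-01-01T00:00:00Z",
      "generatorURL": "http://localhost:9090/graph"
    }
  ]
}
```

A resolved notification looks the same with "status": "resolved" and a real endsAt timestamp.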

Add webhook routing rule

Add a routing rule to send specific alerts to your webhook endpoint.

    - match:
        service: 'database'
      receiver: 'webhook-alerts'

Reload Alertmanager configuration

Apply the updated configuration. Restarting is the simplest option; Alertmanager also reloads its configuration on SIGHUP if you want to avoid a restart.

sudo systemctl restart alertmanager
sudo systemctl status alertmanager

Verify your setup

# Check Alertmanager status
sudo systemctl status alertmanager

Verify web UI is accessible

curl -I http://localhost:9093

Check current alerts

curl http://localhost:9093/api/v2/alerts

Validate configuration

amtool check-config /etc/alertmanager/alertmanager.yml

View logs for troubleshooting

sudo journalctl -u alertmanager -f

Check Prometheus integration

curl http://localhost:9090/api/v1/alertmanagers

Access the Alertmanager web UI at http://your-server-ip:9093 to view active alerts, silences, and configuration status.

Common issues

  • Email notifications not sent — SMTP authentication failed. Fix: verify credentials and use an App Password for Gmail.
  • Slack notifications missing — invalid webhook URL. Fix: check the webhook URL and channel permissions.
  • Alertmanager won't start — configuration syntax error. Fix: run amtool check-config /etc/alertmanager/alertmanager.yml.
  • No alerts showing — Prometheus not connected. Fix: check the Prometheus alerting config and restart the service.
  • Permission denied errors — incorrect file ownership. Fix: run sudo chown -R alertmanager:alertmanager /etc/alertmanager.
  • Web UI not accessible — firewall blocking the port. Fix: open port 9093 in your firewall configuration.
Never use chmod 777. It gives every user on the system full access to your files. Alertmanager config contains sensitive credentials, so use chmod 640 and proper ownership with chown.

Next steps

Running this in production?

Setting this up once is straightforward; keeping it patched, monitored, backed up, and performant across environments is the harder part. See how we run infrastructure like this for European teams.


Need help?

We handle managed DevOps services for businesses that depend on uptime, from initial setup to ongoing operations.