Set up Grafana alerting with Slack and Microsoft Teams integration

Intermediate · 25 min · Apr 18, 2026
Ubuntu 24.04 · Debian 12 · AlmaLinux 9 · Rocky Linux 9

Configure Grafana's unified alerting system to send notifications to Slack and Microsoft Teams. Set up alert rules, notification policies, and webhook integrations for comprehensive monitoring coverage.

Prerequisites

  • Grafana instance with admin access
  • Slack workspace admin permissions
  • Microsoft Teams channel access
  • At least one configured data source in Grafana

What this solves

Grafana's unified alerting system lets you create sophisticated alert rules and route notifications to multiple channels. This tutorial shows you how to set up Slack and Microsoft Teams integrations so your team gets notified when metrics cross thresholds or systems go down.

If you don't have Grafana set up yet, see our Grafana installation guide.

Step-by-step configuration

Create Slack webhook URL

First, create an incoming webhook in your Slack workspace to receive alerts from Grafana.

  1. Go to https://api.slack.com/apps and click "Create New App"
  2. Select "From scratch" and name your app "Grafana Alerts"
  3. Choose your workspace and click "Create App"
  4. In the left sidebar, click "Incoming Webhooks"
  5. Toggle "Activate Incoming Webhooks" to On
  6. Click "Add New Webhook to Workspace"
  7. Select the channel where you want alerts and click "Allow"
  8. Copy the webhook URL that starts with https://hooks.slack.com/services/

Create Microsoft Teams webhook URL

Set up an incoming webhook in Microsoft Teams to receive Grafana notifications. Note that Microsoft has been retiring Office 365 Connectors in favor of Workflows; if "Connectors" is not available in your tenant, create the incoming webhook through the Workflows app instead.

  1. Open Microsoft Teams and navigate to your target channel
  2. Click the three dots next to the channel name
  3. Select "Connectors"
  4. Find "Incoming Webhook" and click "Configure"
  5. Name it "Grafana Alerts" and optionally upload an icon
  6. Click "Create"
  7. Copy the webhook URL (it points at outlook.office.com or your tenant's webhook.office.com domain)
  8. Click "Done"

Configure Slack contact point in Grafana

Create a contact point that defines how Grafana sends alerts to Slack.

  1. Log into Grafana as an admin user
  2. Navigate to "Alerting" → "Contact points"
  3. Click "Add contact point"
  4. Set the name to "slack-alerts"
  5. From the "Contact point type" dropdown, select "Slack"
  6. Paste your Slack webhook URL in the "Webhook URL" field
  7. Set the username to "Grafana"
  8. In the "Title" field, enter: {{ .GroupLabels.alertname }} - {{ .Status | title }}
  9. In the "Text" field, enter:
{{ range .Alerts }}
Alert: {{ .Annotations.summary }}
Description: {{ .Annotations.description }}
Status: {{ .Status }}
Labels: {{ range .Labels.SortedPairs }}{{ .Name }}: {{ .Value }} {{ end }}
{{ end }}
  10. Click "Test" to send a test notification
  11. Click "Save contact point"

Configure Teams contact point in Grafana

Create a second contact point for Microsoft Teams notifications.

    1. In Grafana, go to "Alerting" → "Contact points"
    2. Click "Add contact point"
    3. Set the name to "teams-alerts"
    4. From the "Contact point type" dropdown, select "Microsoft Teams"
    5. Paste your Teams webhook URL in the "Webhook URL" field
    6. In the "Title" field, enter: {{ .GroupLabels.alertname }} Alert
    7. In the "Message" field, enter:
    {{ range .Alerts }}
    Summary: {{ .Annotations.summary }}  
    Description: {{ .Annotations.description }}  
    Status: {{ .Status }}  
    Severity: {{ .Labels.severity }}  
    Time: {{ .StartsAt.Format "2006-01-02 15:04:05" }}  
    {{ end }}
  8. Click "Test" to verify the Teams integration
  9. Click "Save contact point"
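If you manage Grafana as code, both contact points can also be defined through Grafana's file-based alerting provisioning instead of the UI. Below is a minimal sketch; the file path, `uid` values, and webhook URLs are placeholders you must replace, and the exact settings keys should be verified against your Grafana version's provisioning documentation:

```yaml
# /etc/grafana/provisioning/alerting/contact-points.yaml (assumed path)
apiVersion: 1
contactPoints:
  - orgId: 1
    name: slack-alerts
    receivers:
      - uid: slack-alerts-cp          # arbitrary, but must be unique
        type: slack
        settings:
          url: https://hooks.slack.com/services/XXX/YYY/ZZZ   # your Slack webhook
          username: Grafana
          title: '{{ .GroupLabels.alertname }} - {{ .Status | title }}'
  - orgId: 1
    name: teams-alerts
    receivers:
      - uid: teams-alerts-cp
        type: teams
        settings:
          url: https://example.webhook.office.com/webhookb2/XXX  # your Teams webhook
          title: '{{ .GroupLabels.alertname }} Alert'
```

Grafana loads files from the provisioning/alerting directory at startup, so restart or reload Grafana after adding the file.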
Create notification policy

Set up a notification policy that determines which alerts get sent to which contact points.

    1. Navigate to "Alerting" → "Notification policies"
    2. Click "Edit" on the default policy
    3. Under "Default contact point", select "slack-alerts"
    4. Click "Save policy"
  5. Click "Add nested policy"
  6. Add a matcher: Label "severity" = "critical"
  7. Set the contact point to "slack-alerts" and enable "Continue matching subsequent sibling nodes"
  8. Add a second nested policy with the same matcher and the contact point "teams-alerts" (each policy routes to a single contact point, so critical alerts need one policy per channel)
  9. Set "Group by" to "alertname"
  10. Set "Group wait" to "10s"
  11. Set "Group interval" to "5m"
  12. Set "Repeat interval" to "12h"
  13. Click "Save policy"
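The same routing tree can be provisioned from a file. A sketch under the same assumptions (the path is an example, and the field names follow Grafana's alerting provisioning format, which mirrors Alertmanager routing; verify against your Grafana version):

```yaml
# /etc/grafana/provisioning/alerting/policies.yaml (assumed path)
apiVersion: 1
policies:
  - orgId: 1
    receiver: slack-alerts            # default contact point
    group_by: ['alertname']
    routes:
      # critical alerts go to Slack AND Teams: "continue: true" lets the
      # alert keep matching the next sibling route after the first match
      - receiver: slack-alerts
        matchers:
          - severity = critical
        continue: true
      - receiver: teams-alerts
        matchers:
          - severity = critical
        group_wait: 10s
        group_interval: 5m
        repeat_interval: 12h
```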

    Create a test alert rule

    Create a simple alert rule to test your notification setup.

    1. Go to "Alerting" → "Alert rules"
    2. Click "Create alert rule"
    3. Set the rule name to "High CPU Usage"
    4. In query A, select your Prometheus data source
    5. Enter the query: 100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) (grouping by instance keeps the instance label available to the alert annotations)
    6. Click "Set as alert condition" for query A
    7. Set the condition to "IS ABOVE" and threshold to "80"
    8. Set evaluation to "1m" for both "For" and "Evaluate every"
    9. Add folder "Test Alerts"
    10. Set evaluation group to "cpu-alerts"
    11. Add annotations:
      • summary: High CPU usage detected
      • description: CPU usage is above 80% on {{ $labels.instance }}
    12. Add labels:
      • severity: warning
      • team: infrastructure
    13. Click "Save rule and exit"

    Create a critical alert rule

    Create a critical severity alert to test the nested notification policy.

    1. Click "Create alert rule" again
    2. Set the rule name to "Service Down"
    3. In query A, enter: up{job="node"} == bool 0 (the bool modifier returns 1 when the target is down instead of filtering the series away)
    4. Click "Set as alert condition" for query A
    5. Set condition to "IS ABOVE" and threshold to "0"
    6. Set evaluation to "30s" for both fields
    7. Use folder "Test Alerts" and group "service-health"
    8. Add annotations:
    • summary: Service is down
    • description: {{ $labels.job }} on {{ $labels.instance }} is not responding
    9. Add labels:
      • severity: critical
      • team: infrastructure
    10. Click "Save rule and exit"

    Verify your setup

    Test your alerting configuration to ensure notifications are working correctly.

View alert rules in the Grafana UI:

  1. Go to Alerting → Alert rules
  2. Check that the State column shows "Normal" or "Pending"

Test contact points:

  1. Go to Alerting → Contact points
  2. Click "Edit" on each contact point
  3. Click "Test" to send a test notification

    You should receive test notifications in both Slack and Teams. Check the Grafana logs if notifications fail:

    sudo journalctl -u grafana-server -f

    Advanced notification templates

    Enhanced Slack template with colors

    Improve your Slack notifications with color coding and better formatting.

    {
      "attachments": [
        {
          "color": "{{ if eq .Status "firing" }}danger{{ else }}good{{ end }}",
          "title": "{{ .GroupLabels.alertname }} - {{ .Status | title }}",
          "text": "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ .Annotations.description }}{{ end }}",
          "fields": [
            {
              "title": "Severity",
              "value": "{{ .CommonLabels.severity }}",
              "short": true
            },
            {
              "title": "Instance",
              "value": "{{ .CommonLabels.instance }}",
              "short": true
            }
          ],
          "footer": "Grafana"
        }
      ]
    }

    Rich Teams template with adaptive cards

    Use Microsoft Teams adaptive cards for better visual presentation.

    {
      "@type": "MessageCard",
      "@context": "https://schema.org/extensions",
      "summary": "{{ .GroupLabels.alertname }} Alert",
      "themeColor": "{{ if eq .Status "firing" }}FF0000{{ else }}00FF00{{ end }}",
      "title": "🚨 {{ .GroupLabels.alertname }}",
      "text": "{{ range .Alerts }}{{ .Annotations.description }}{{ end }}",
      "sections": [
        {
          "facts": [
            {
              "name": "Status",
              "value": "{{ .Status }}"
            },
            {
              "name": "Severity",
              "value": "{{ .CommonLabels.severity }}"
            },
            {
              "name": "Instance",
              "value": "{{ .CommonLabels.instance }}"
            },
            {
              "name": "Started",
              "value": "{{ (index .Alerts 0).StartsAt }}"
            }
          ]
        }
      ],
      "potentialAction": [
        {
          "@type": "OpenUri",
          "name": "View in Grafana",
          "targets": [
            {
              "os": "default",
              "uri": "https://your-grafana-url/alerting/list"
            }
          ]
        }
      ]
    }

    Troubleshooting webhooks

    Note: Webhook failures are often caused by network connectivity or invalid URLs. Test webhooks manually first.

    Test Slack webhook manually

    Verify your Slack webhook works outside of Grafana.

    curl -X POST -H 'Content-type: application/json' \
      --data '{"text":"Test message from Grafana"}' \
      YOUR_SLACK_WEBHOOK_URL

    Test Teams webhook manually

    Check your Teams webhook with a direct HTTP request.

    curl -X POST -H 'Content-Type: application/json' \
      --data '{
        "@type": "MessageCard",
        "@context": "https://schema.org/extensions",
        "summary": "Test Alert",
        "text": "This is a test message from Grafana"
      }' \
      YOUR_TEAMS_WEBHOOK_URL

    Configure alert silencing

    Create silence rules

    Set up maintenance windows to suppress alerts during planned downtime.

    1. Go to "Alerting" → "Silences"
    2. Click "Add silence"
    3. Add matchers for the alerts you want to silence:
      • alertname = "High CPU Usage"
      • instance = "server01:9100"
    4. Set start time to now and duration to "2h"
    5. Add comment: "Planned maintenance window"
    6. Click "Create"

    Monitor alert delivery

    Track notification delivery success in Grafana's built-in metrics. If you have Prometheus monitoring Grafana, check these metrics for alerting health:

    # Total notifications sent
    grafana_alerting_notifications_sent_total

    # Notification failures
    grafana_alerting_notifications_failed_total

    # Alert rule evaluation duration
    grafana_alerting_rule_evaluation_duration_seconds

    # Active alert rules
    grafana_alerting_active_alert_rules

    Create dashboard panels with these metrics to monitor your alerting system's health. You can also set up alerts on these metrics to get notified if your notification system fails.
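As an example, a meta-alert on delivery failures could use a PromQL expression along these lines (metric name as listed above; verify it against your Grafana version's /metrics output):

```promql
# fires if any notification failed to send in the last 5 minutes
increase(grafana_alerting_notifications_failed_total[5m]) > 0
```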

    Common issues

    Symptom | Cause | Fix
    --- | --- | ---
    Slack notifications not received | Invalid webhook URL or disabled app | Regenerate the webhook URL in Slack app settings
    Teams notifications fail | Webhook expired or channel deleted | Recreate the incoming webhook connector in Teams
    Alert rules always firing | Incorrect query or threshold | Test the query in Explore, adjust threshold values
    No notifications despite firing alerts | Notification policy not matching | Check label matchers in notification policies
    Too many notifications | Short group interval or no grouping | Increase group interval to 10m, group by alertname
    Template errors in messages | Invalid Go template syntax | Test templates with simple variables first
    Webhook timeouts | Network connectivity or slow response | Check firewall rules, test webhook URLs manually

    Best practices for production alerting

    Follow these practices when deploying Grafana alerting in production environments:

    • Use different channels for different severities: Send critical alerts to incident response channels, warnings to monitoring channels
    • Implement escalation policies: Create nested policies that escalate to management after a certain time
    • Group related alerts: Use appropriate grouping to avoid notification storms during widespread issues
    • Set meaningful repeat intervals: Balance between awareness and alert fatigue (12-24 hours for most alerts)
    • Test your contact points regularly: Set up automated tests to ensure webhooks remain functional
    • Monitor your monitoring: Alert on Grafana metrics to detect alerting system failures
    • Document runbooks: Include links to troubleshooting steps in alert descriptions

    For complex environments, consider using Prometheus Alertmanager for more advanced routing and inhibition rules.

    Next steps

    Running this in production?

    Setting this up once is straightforward; keeping it patched, monitored, backed up and performant across environments is the harder part. We handle managed DevOps services for businesses that depend on uptime, from initial setup to ongoing operations. See how we run infrastructure like this for European teams.