Configure Grafana's unified alerting system to send notifications to Slack and Microsoft Teams. Set up alert rules, notification policies, and webhook integrations for comprehensive monitoring coverage.
Prerequisites
- Grafana instance with admin access
- Slack workspace admin permissions
- Microsoft Teams channel access
- At least one configured data source in Grafana
What this solves
Grafana's unified alerting system lets you create sophisticated alert rules and route notifications to multiple channels. This tutorial shows you how to set up Slack and Microsoft Teams integrations so your team gets notified when metrics cross thresholds or systems go down.
If you don't have a Grafana instance set up yet, see our Grafana installation guide.
Step-by-step configuration
Create Slack webhook URL
First, create an incoming webhook in your Slack workspace to receive alerts from Grafana.
- Go to https://api.slack.com/apps and click "Create New App"
- Select "From scratch" and name your app "Grafana Alerts"
- Choose your workspace and click "Create App"
- In the left sidebar, click "Incoming Webhooks"
- Toggle "Activate Incoming Webhooks" to On
- Click "Add New Webhook to Workspace"
- Select the channel where you want alerts and click "Allow"
- Copy the webhook URL, which starts with https://hooks.slack.com/services/
Create Microsoft Teams webhook URL
Set up an incoming webhook connector in Microsoft Teams to receive Grafana notifications.
- Open Microsoft Teams and navigate to your target channel
- Click the three dots next to the channel name
- Select "Connectors"
- Find "Incoming Webhook" and click "Configure"
- Name it "Grafana Alerts" and optionally upload an icon
- Click "Create"
- Copy the webhook URL (it points at outlook.office.com or webhook.office.com, depending on your tenant)
- Click "Done"
Configure Slack contact point in Grafana
Create a contact point that defines how Grafana sends alerts to Slack.
- Log into Grafana as an admin user
- Navigate to "Alerting" → "Contact points"
- Click "Add contact point"
- Set the name to "slack-alerts"
- From the "Contact point type" dropdown, select "Slack"
- Paste your Slack webhook URL in the "Webhook URL" field
- Set the username to "Grafana"
- In the "Title" field, enter:
{{ .GroupLabels.alertname }} - {{ .Status | title }}
- In the "Text" field, enter:
{{ range .Alerts }}
Alert: {{ .Annotations.summary }}
Description: {{ .Annotations.description }}
Status: {{ .Status }}
Labels: {{ range .Labels.SortedPairs }}{{ .Name }}: {{ .Value }} {{ end }}
{{ end }}
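If you manage Grafana as code, the same contact point can be file-provisioned instead of configured in the UI. The sketch below writes the provisioning file to the working directory; on a real host it belongs in /etc/grafana/provisioning/alerting/. The YAML schema shown assumes Grafana 9+ alerting file provisioning, so verify the field names against your version's documentation.

```shell
# Sketch: file-provision the "slack-alerts" contact point. The schema
# (contactPoints/receivers/settings) assumes Grafana 9+ file provisioning.
# Written locally here; deploy to /etc/grafana/provisioning/alerting/.
cat > slack-contact-point.yaml <<'EOF'
apiVersion: 1
contactPoints:
  - orgId: 1
    name: slack-alerts
    receivers:
      - uid: slack-alerts-01
        type: slack
        settings:
          url: https://hooks.slack.com/services/YOUR/WEBHOOK/URL
          username: Grafana
EOF
echo "Wrote slack-contact-point.yaml"
```

Grafana loads files from that directory at startup, so restart the service after deploying the file.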
Configure Teams contact point in Grafana
Create a second contact point for Microsoft Teams notifications.
- In Grafana, go to "Alerting" → "Contact points"
- Click "Add contact point"
- Set the name to "teams-alerts"
- From the "Contact point type" dropdown, select "Microsoft Teams"
- Paste your Teams webhook URL in the "Webhook URL" field
- In the "Title" field, enter:
{{ .GroupLabels.alertname }} Alert
- In the "Message" field, enter:
{{ range .Alerts }}
Summary: {{ .Annotations.summary }}
Description: {{ .Annotations.description }}
Status: {{ .Status }}
Severity: {{ .Labels.severity }}
Time: {{ .StartsAt.Format "2006-01-02 15:04:05" }}
{{ end }}
Create notification policy
Set up a notification policy that determines which alerts get sent to which contact points.
- Navigate to "Alerting" → "Notification policies"
- Click "Edit" on the default policy
- Under "Default contact point", select "slack-alerts"
- Click "Save policy"
- Click "Add nested policy"
- Add a matcher: Label "severity" = "critical"
- Set contact points to both "slack-alerts" and "teams-alerts"
- Set "Group by" to "alertname"
- Set "Group wait" to "10s"
- Set "Group interval" to "5m"
- Set "Repeat interval" to "12h"
- Click "Save policy"
Create a test alert rule
Create a simple alert rule to test your notification setup.
- Go to "Alerting" → "Alert rules"
- Click "Create alert rule"
- Set the rule name to "High CPU Usage"
- In query A, select your Prometheus data source
- Enter the query:
100 - (avg(irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
- Click "Set as alert condition" for query A
- Set the condition to "IS ABOVE" and threshold to "80"
- Set evaluation to "1m" for both "For" and "Evaluate every"
- Add folder "Test Alerts"
- Set evaluation group to "cpu-alerts"
- Add annotations:
  - summary: High CPU usage detected
  - description: CPU usage is above 80% on {{ $labels.instance }}
- Add labels:
  - severity: warning
  - team: infrastructure
- Click "Save rule and exit"
Create a critical alert rule
Create a critical severity alert to test the nested notification policy.
- Click "Create alert rule" again
- Set the rule name to "Service Down"
- In query A, enter:
up{job="node"}
- Click "Set as alert condition" for query A
- Set the condition to "IS BELOW" and the threshold to "1" (the up metric reports 0 when a scrape target is down)
- Set evaluation to "30s" for both fields
- Use folder "Test Alerts" and group "service-health"
- Add annotations:
  - summary: Service is down
  - description: {{ $labels.job }} on {{ $labels.instance }} is not responding
- Add labels:
  - severity: critical
  - team: infrastructure
- Click "Save rule and exit"
Verify your setup
Test your alerting configuration to ensure notifications are working correctly.
View alert rules in the Grafana UI:
- Go to "Alerting" → "Alert rules"
- Check that the State column shows "Normal" or "Pending"
Test contact points:
- Go to "Alerting" → "Contact points"
- Click "Edit" on each contact point
- Click the "Test" button to send a test notification
You should receive test notifications in both Slack and Teams. Check the Grafana logs if notifications fail:
sudo journalctl -u grafana-server -f
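When a test notification fails, the relevant log lines are usually tagged with the notifier's logger name. The log lines below are illustrative (the exact format depends on your Grafana version); the filtering pattern is the point:

```shell
# Illustrative (made-up) log lines showing what a failed Slack delivery
# might look like; on a live host, pipe journalctl output into the same
# filter:  sudo journalctl -u grafana-server | grep -iE 'alert|notif'
printf '%s\n' \
  't=2024-01-01T00:00:00 logger=alerting.notifier.slack level=error msg="failed to send webhook"' \
  't=2024-01-01T00:00:01 logger=http.server level=info msg="request completed"' \
| grep -iE 'alert|notif'
```

Only the notifier line survives the filter, which keeps the firehose of HTTP request logs out of the way while you debug.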
Advanced notification templates
Enhanced Slack template with colors
Improve your Slack notifications with color coding and better formatting. Note that .GroupLabels only contains the labels your notification policy groups by; fields like severity and instance below are populated only if you group by those labels.
{
"attachments": [
{
"color": "{{ if eq .Status "firing" }}danger{{ else }}good{{ end }}",
"title": "{{ .GroupLabels.alertname }} - {{ .Status | title }}",
"text": "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ .Annotations.description }}{{ end }}",
"fields": [
{
"title": "Severity",
"value": "{{ .GroupLabels.severity }}",
"short": true
},
{
"title": "Instance",
"value": "{{ .GroupLabels.instance }}",
"short": true
}
],
"footer": "Grafana"
}
]
}
Rich Teams template with MessageCard formatting
Use the Microsoft Teams MessageCard format for better visual presentation. As with the Slack template, .GroupLabels fields are populated only for labels your policy groups by.
{
"@type": "MessageCard",
"@context": "https://schema.org/extensions",
"summary": "{{ .GroupLabels.alertname }} Alert",
"themeColor": "{{ if eq .Status "firing" }}FF0000{{ else }}00FF00{{ end }}",
"title": "🚨 {{ .GroupLabels.alertname }}",
"text": "{{ range .Alerts }}{{ .Annotations.description }}{{ end }}",
"sections": [
{
"facts": [
{
"name": "Status",
"value": "{{ .Status }}"
},
{
"name": "Severity",
"value": "{{ .GroupLabels.severity }}"
},
{
"name": "Instance",
"value": "{{ .GroupLabels.instance }}"
},
{
"name": "Started",
"value": "{{ (index .Alerts 0).StartsAt }}"
}
]
}
],
"potentialAction": [
{
"@type": "OpenUri",
"name": "View in Grafana",
"targets": [
{
"os": "default",
"uri": "https://your-grafana-url/alerting/list"
}
]
}
]
}
Troubleshooting webhooks
Test Slack webhook manually
Verify your Slack webhook works outside of Grafana.
curl -X POST -H 'Content-type: application/json' \
--data '{"text":"Test message from Grafana"}' \
YOUR_SLACK_WEBHOOK_URL
Test Teams webhook manually
Check your Teams webhook with a direct HTTP request.
curl -X POST -H 'Content-Type: application/json' \
--data '{
"@type": "MessageCard",
"@context": "https://schema.org/extensions",
"summary": "Test Alert",
"text": "This is a test message from Grafana"
}' \
YOUR_TEAMS_WEBHOOK_URL
Configure alert silencing
Create silence rules
Set up maintenance windows to suppress alerts during planned downtime.
- Go to "Alerting" → "Silences"
- Click "Add silence"
- Add matchers for the alerts you want to silence:
- alertname = "High CPU Usage"
- instance = "server01:9100"
- Set start time to now and duration to "2h"
- Add comment: "Planned maintenance window"
- Click "Create"
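The same silence can be created programmatically. The payload below follows the Alertmanager v2 silence schema; the endpoint and admin:PASSWORD credentials in the final comment are assumptions to verify against your deployment (Grafana exposes an Alertmanager-compatible API for its built-in Alertmanager).

```shell
# Sketch: build an Alertmanager v2 silence payload for a 2h maintenance
# window. The endpoint and credentials in the comment below are assumptions.
start=$(date -u +%Y-%m-%dT%H:%M:%SZ)
# GNU date first, BSD date fallback:
end=$(date -u -d '+2 hours' +%Y-%m-%dT%H:%M:%SZ 2>/dev/null \
   || date -u -v+2H +%Y-%m-%dT%H:%M:%SZ)
cat > silence.json <<EOF
{
  "matchers": [
    {"name": "alertname", "value": "High CPU Usage", "isRegex": false},
    {"name": "instance", "value": "server01:9100", "isRegex": false}
  ],
  "startsAt": "$start",
  "endsAt": "$end",
  "comment": "Planned maintenance window",
  "createdBy": "ops"
}
EOF
# Then POST it (hypothetical host and credentials):
# curl -u admin:PASSWORD -H 'Content-Type: application/json' \
#   -X POST --data @silence.json \
#   http://localhost:3000/api/alertmanager/grafana/api/v2/silences
```

Scripting silences this way is useful when a deployment pipeline should suppress alerts for exactly the duration of a rollout.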
Monitor alert delivery
Track notification delivery success in Grafana's built-in metrics. If you have Prometheus monitoring Grafana, check these metrics for alerting health:
# Total notifications sent
grafana_alerting_notifications_sent_total
# Notification failures
grafana_alerting_notifications_failed_total
# Alert rule evaluation duration
grafana_alerting_rule_evaluation_duration_seconds
# Active alert rules
grafana_alerting_active_alert_rules
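These counters compose naturally into a meta-alert. A hedged sketch of a PromQL expression for "more than 10% of notification attempts are failing", using the counter names listed above:

```promql
# Fires when over 10% of notification attempts failed in the last 5 minutes
  rate(grafana_alerting_notifications_failed_total[5m])
/
  rate(grafana_alerting_notifications_sent_total[5m])
> 0.1
```

Route an alert built on this expression to a channel that does not depend on the same webhook you are monitoring, or the failure notification may itself fail to arrive.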
Create dashboard panels with these metrics to monitor your alerting system's health. You can also set up alerts on these metrics to get notified if your notification system fails.
Common issues
| Symptom | Cause | Fix |
|---|---|---|
| Slack notifications not received | Invalid webhook URL or disabled app | Regenerate webhook URL in Slack app settings |
| Teams notifications fail | Webhook expired or channel deleted | Recreate incoming webhook connector in Teams |
| Alert rules always firing | Incorrect query or threshold | Test query in Explore view, adjust threshold values |
| No notifications despite firing alerts | Notification policy not matching | Check label matchers in notification policies |
| Too many notifications | Short group interval or no grouping | Increase group interval to 10m, group by alertname |
| Template errors in messages | Invalid Go template syntax | Test templates with simple variables first |
| Webhook timeouts | Network connectivity or slow response | Check firewall rules, test webhook URLs manually |
Best practices for production alerting
Follow these practices when deploying Grafana alerting in production environments:
- Use different channels for different severities: Send critical alerts to incident response channels, warnings to monitoring channels
- Implement escalation policies: Create nested policies that escalate to management after a certain time
- Group related alerts: Use appropriate grouping to avoid notification storms during widespread issues
- Set meaningful repeat intervals: Balance between awareness and alert fatigue (12-24 hours for most alerts)
- Test your contact points regularly: Set up automated tests to ensure webhooks remain functional
- Monitor your monitoring: Alert on Grafana metrics to detect alerting system failures
- Document runbooks: Include links to troubleshooting steps in alert descriptions
For complex environments, consider using Prometheus Alertmanager for more advanced routing and inhibition rules.
Next steps
- Configure advanced Grafana dashboards and alerting
- Implement Grafana alerting with Prometheus and InfluxDB
- Set up Grafana Enterprise SSO authentication
- Configure Grafana LDAP authentication with Active Directory
- Set up Grafana high availability cluster with load balancer