Set up InfluxDB alerting with Kapacitor and notifications

Intermediate · 25 min · Apr 22, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Configure comprehensive alerting for InfluxDB using Kapacitor with email, Slack, and webhook notifications. Set up real-time monitoring, thresholds, and automated responses for time-series data anomalies.

Prerequisites

  • Root access to server
  • Internet connection for package downloads
  • Email account for SMTP notifications
  • Slack workspace for webhook integration

What this solves

InfluxDB provides excellent time-series data storage, but monitoring that data for anomalies and threshold breaches requires a dedicated alerting system. Kapacitor, InfluxDB's native stream processing engine, analyzes your data in real-time and triggers notifications when conditions are met. This tutorial shows you how to set up production-ready alerting with multiple notification channels including email, Slack, and custom webhooks.

Step-by-step installation

Update system packages

Start by updating your package manager so you have the latest security updates and package information. Run the command that matches your distribution.

# Ubuntu 24.04 / Debian 12
sudo apt update && sudo apt upgrade -y

# AlmaLinux 9 / Rocky Linux 9
sudo dnf update -y

Install InfluxDB

Add the InfluxDB repository and install both InfluxDB and Kapacitor from the official packages (the commands below are for Debian/Ubuntu).

curl -s https://repos.influxdata.com/influxdata-archive_compat.key | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg
echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg] https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list
sudo apt update
sudo apt install -y influxdb kapacitor
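On AlmaLinux 9 and Rocky Linux 9, the packages come from InfluxData's RPM repository instead. A sketch of the repo file, assuming InfluxData's published stable repository layout (verify the baseurl against the current InfluxData docs before use):

```ini
# /etc/yum.repos.d/influxdata.repo — baseurl assumed from InfluxData's
# documented stable RPM repository layout; confirm before relying on it.
[influxdata]
name = InfluxData Repository - Stable
baseurl = https://repos.influxdata.com/stable/$basearch/main
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdata-archive_compat.key
```

After creating the file, install both packages with sudo dnf install -y influxdb kapacitor.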

Configure InfluxDB

Edit /etc/influxdb/influxdb.conf to set the storage paths and enable HTTP authentication for production security.

[meta]
  dir = "/var/lib/influxdb/meta"

[data]
  dir = "/var/lib/influxdb/data"
  wal-dir = "/var/lib/influxdb/wal"

[http]
  enabled = true
  bind-address = ":8086"
  auth-enabled = true
  log-enabled = true
  write-tracing = false
  pprof-enabled = false
  https-enabled = false

Start InfluxDB and create admin user

Enable and start InfluxDB, then create an administrative user for secure access.

sudo systemctl enable --now influxdb
sudo systemctl status influxdb

Create the admin user and database for monitoring data:

influx -execute "CREATE USER admin WITH PASSWORD 'secure_admin_password123!' WITH ALL PRIVILEGES"
influx -username admin -password 'secure_admin_password123!' -execute "CREATE DATABASE monitoring"
influx -username admin -password 'secure_admin_password123!' -execute "CREATE DATABASE kapacitor"

Configure Kapacitor

Edit /etc/kapacitor/kapacitor.conf to connect Kapacitor to InfluxDB with authentication and configure the notification channels. Replace the SMTP and Slack placeholders with your own credentials.

hostname = "localhost"
data_dir = "/var/lib/kapacitor"
skip-config-overrides = false
default-retention-policy = ""

[http]
  bind-address = ":9092"
  auth-enabled = false
  log-enabled = true
  write-tracing = false
  pprof-enabled = false
  https-enabled = false

[[influxdb]]
  enabled = true
  name = "default"
  default = true
  urls = ["http://localhost:8086"]
  username = "admin"
  password = "secure_admin_password123!"
  ssl-ca = ""
  ssl-cert = ""
  ssl-key = ""
  insecure-skip-verify = false
  timeout = "5s"
  disable-subscriptions = false
  subscription-protocol = "http"
  subscription-mode = "cluster"
  kapacitor-hostname = ""
  http-port = 0
  udp-bind = ""
  udp-buffer = 1000
  udp-read-buffer = 0
  startup-timeout = "5m0s"
  subscriptions-sync-interval = "1m0s"

[logging]
  file = "/var/log/kapacitor/kapacitor.log"
  level = "INFO"

[smtp]
  enabled = true
  host = "smtp.gmail.com"
  port = 587
  username = "your-email@gmail.com"
  password = "your-app-password"
  no-verify = false
  global = false
  state-changes-only = false
  from = "your-email@gmail.com"
  idle-timeout = "30s"

[slack]
  enabled = true
  url = "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
  channel = "#alerts"
  username = "kapacitor"
  icon-emoji = ":exclamation:"
  global = false
  state-changes-only = false

Start Kapacitor service

Enable and start Kapacitor to begin processing InfluxDB data streams.

sudo systemctl enable --now kapacitor
sudo systemctl status kapacitor

Create sample data for testing

Insert some test data into InfluxDB to verify alerting functionality works correctly.

influx -username admin -password 'secure_admin_password123!' -database monitoring -execute '
INSERT cpu_usage,host=server01,region=us-east value=45.2
INSERT cpu_usage,host=server01,region=us-east value=78.9
INSERT cpu_usage,host=server01,region=us-east value=92.1
INSERT memory_usage,host=server01,region=us-east value=67.4
INSERT memory_usage,host=server01,region=us-east value=89.3
INSERT disk_usage,host=server01,region=us-east value=34.7'
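The INSERT statements above use InfluxDB line protocol: a measurement name, comma-separated tags, fields, and an optional nanosecond timestamp (omitted above, so the server assigns the current time). A sketch of generating timestamped points, for example to feed into the /write endpoint:

```shell
# Build line-protocol points with explicit nanosecond timestamps.
# Format: <measurement>,<tag=val,...> <field=val,...> <timestamp_ns>
now=$(date +%s%N)
for v in 45.2 78.9 92.1; do
  printf 'cpu_usage,host=server01,region=us-east value=%s %s\n' "$v" "$now"
  now=$((now + 1000000000))   # advance one second per point
done
```

Each line becomes one point; points sharing measurement, tag set, and timestamp overwrite each other, which is why distinct timestamps matter when batch-loading.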

Configure alert rules and notifications

Create CPU usage alert

Save the following TICKscript as /tmp/cpu_alert.tick. It monitors CPU usage and triggers alerts when thresholds are exceeded.

stream
    |from()
        .measurement('cpu_usage')
    |eval(lambda: "value")
        .as('cpu_percent')
    |alert()
        .crit(lambda: "cpu_percent" > 80.0)
        .warn(lambda: "cpu_percent" > 70.0)
        .message('{{ .Time }}: {{ index .Tags "host" }} CPU usage is {{ index .Fields "cpu_percent" }}%')
        .id('{{ index .Tags "host" }}/cpu_usage')
        .idField('id')
        .levelField('level')
        .messageField('message')
        .durationField('duration')
        .email()
            .to('admin@example.com')
        .slack()
            .channel('#alerts')

Define and enable the CPU alert task:

kapacitor define cpu_alert -type stream -tick /tmp/cpu_alert.tick -dbrp monitoring.autogen
kapacitor enable cpu_alert
kapacitor list tasks

Create memory usage alert

Set up monitoring for memory usage with different notification channels for different severity levels. Save the following TICKscript as /tmp/memory_alert.tick.

stream
    |from()
        .measurement('memory_usage')
    |eval(lambda: "value")
        .as('memory_percent')
    |alert()
        .crit(lambda: "memory_percent" > 90.0)
        .warn(lambda: "memory_percent" > 75.0)
        .message('{{ .Time }}: {{ index .Tags "host" }} Memory usage is {{ index .Fields "memory_percent" }}%')
        .id('{{ index .Tags "host" }}/memory_usage')
        .idField('id')
        .levelField('level')
        .messageField('message')
        .durationField('duration')
        .email()
            .to('admin@example.com')
        .slack()
            .channel('#alerts')
        .post('http://example.com/webhook')
            .header('Content-Type', 'application/json')

Define and enable the memory alert task:

kapacitor define memory_alert -type stream -tick /tmp/memory_alert.tick -dbrp monitoring.autogen
kapacitor enable memory_alert

Configure webhook notifications

Set up custom webhook integration for external monitoring systems or custom alerting workflows. Save the following TICKscript as /tmp/webhook_alert.tick.

stream
    |from()
        .measurement('disk_usage')
    |eval(lambda: "value")
        .as('disk_percent')
    |alert()
        .crit(lambda: "disk_percent" > 85.0)
        .warn(lambda: "disk_percent" > 70.0)
        .message('Disk usage alert: {{ index .Tags "host" }} at {{ index .Fields "disk_percent" }}%')
        .id('{{ index .Tags "host" }}/disk_usage')
        .post('https://hooks.example.com/kapacitor-webhook')
            .header('Content-Type', 'application/json')
            .header('Authorization', 'Bearer your-webhook-token')

Define and enable the webhook alert task:

kapacitor define webhook_alert -type stream -tick /tmp/webhook_alert.tick -dbrp monitoring.autogen
kapacitor enable webhook_alert
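Your webhook receiver should expect a JSON body containing the alert id, message, level, time, duration, and the triggering series data. A representative payload, hand-written here to match the fields produced by the alert node (not captured from a live system):

```json
{
  "id": "server01/disk_usage",
  "message": "Disk usage alert: server01 at 87.2%",
  "details": "",
  "time": "2026-04-22T10:15:00Z",
  "duration": 0,
  "level": "CRITICAL",
  "data": {
    "series": [
      {
        "name": "disk_usage",
        "tags": {"host": "server01", "region": "us-east"},
        "columns": ["time", "disk_percent"],
        "values": [["2026-04-22T10:15:00Z", 87.2]]
      }
    ]
  }
}
```

The level field cycles through OK, INFO, WARNING, and CRITICAL, so a receiver can treat a transition back to OK as a recovery notification.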

Set up a deadman switch

Configure a deadman switch to detect when data stops flowing. The deadman node fires when the throughput of a stream drops below a threshold (points per interval). Save the following TICKscript as /tmp/deadman_alert.tick.

stream
    |from()
        .measurement('cpu_usage')
    |deadman(0.0, 2m)
        .id('{{ index .Tags "host" }}/deadman')
        .message('{{ .Time }}: No data received from {{ index .Tags "host" }} for 2 minutes')
        .email()
            .to('admin@example.com')
        .slack()
            .channel('#critical')

Define and enable the deadman task:

kapacitor define deadman_alert -type stream -tick /tmp/deadman_alert.tick -dbrp monitoring.autogen
kapacitor enable deadman_alert

Test alerting pipeline

Trigger test alerts

Insert data that exceeds your configured thresholds to verify notifications work correctly.

influx -username admin -password 'secure_admin_password123!' -database monitoring -execute '
INSERT cpu_usage,host=server01,region=us-east value=95.7
INSERT memory_usage,host=server01,region=us-east value=92.4
INSERT disk_usage,host=server01,region=us-east value=87.2'

Monitor alert status

Check Kapacitor logs and alert status to confirm your rules are processing correctly.

kapacitor list tasks
kapacitor show cpu_alert
sudo journalctl -u kapacitor -f

Verify your setup

sudo systemctl status influxdb kapacitor
kapacitor list tasks
influx -username admin -password 'secure_admin_password123!' -execute "SHOW DATABASES"
curl -G http://localhost:9092/kapacitor/v1/tasks
kapacitor show-topic alerts   # only if your scripts publish to an alert topic via .topic()

Check that you can query your monitoring data and that Kapacitor is processing streams:

influx -username admin -password 'secure_admin_password123!' -database monitoring -execute "SELECT * FROM cpu_usage ORDER BY time DESC LIMIT 5"
kapacitor stats general

Common issues

Symptom | Cause | Fix
------- | ----- | ---
Kapacitor can't connect to InfluxDB | Authentication failure or wrong credentials | Verify username/password in /etc/kapacitor/kapacitor.conf
Email notifications not sending | SMTP configuration incorrect | Test the SMTP settings with a minimal task: kapacitor define test_smtp -tick test_email.tick
Slack notifications failing | Invalid webhook URL | Check the webhook URL and channel permissions in Slack settings
Alert rules not triggering | No matching data or incorrect query | Test the query in InfluxDB first: SELECT * FROM cpu_usage
Kapacitor service won't start | Configuration syntax error | Check logs: sudo journalctl -u kapacitor -n 50
Permission denied errors | Incorrect file ownership | sudo chown -R kapacitor:kapacitor /var/lib/kapacitor
Never use chmod 777. It gives every user on the system full access to your files. Instead, fix ownership with chown and use minimal permissions like 755 for directories and 644 for files.
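The ownership fix pairs with those minimal permissions. This sketch demonstrates the pattern on a scratch directory with made-up file names; on a real host you would target /var/lib/kapacitor with sudo and chown -R kapacitor:kapacitor first.

```shell
# Demonstrate minimal permissions (755 for dirs, 644 for files) on a scratch
# copy; substitute /var/lib/kapacitor (with sudo and chown) on a real host.
demo=$(mktemp -d)
mkdir -p "$demo/replicas"
touch "$demo/replicas/kapacitor.db"
find "$demo" -type d -exec chmod 755 {} +
find "$demo" -type f -exec chmod 644 {} +
stat -c '%a %n' "$demo/replicas" "$demo/replicas/kapacitor.db"
```

Running find with -exec rather than chmod -R keeps directory and file modes distinct, so files never end up executable.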

Production configuration tips

For production deployments, consider these additional configurations:

  • Alert batching: Group multiple alerts to prevent notification storms during outages
  • Alert inhibition: Suppress dependent alerts when higher-level services are down
  • Retention policies: Configure appropriate data retention for alert history
  • High availability: Run multiple Kapacitor instances with shared InfluxDB backend
  • Security: Enable HTTPS and authentication for Kapacitor HTTP API

You can also integrate with existing monitoring solutions like Prometheus for Kubernetes monitoring or set up Grafana alerting with multiple notification channels for a comprehensive monitoring stack.

Next steps

Running this in production?

Want this handled for you? Setting up alerting once is straightforward. Keeping it tuned, managing alert fatigue, and maintaining reliable notifications across environments is the harder part. See how we run infrastructure like this for European teams.


Need help?

Don't want to manage this yourself?

We provide managed DevOps services for businesses that depend on uptime, from initial setup to ongoing operations.