Integrate Consul with Kubernetes service discovery and automatic configuration

Intermediate · 45 min · Apr 11, 2026

Ubuntu 24.04 · Debian 12 · AlmaLinux 9 · Rocky Linux 9

Set up Consul for dynamic service discovery in Kubernetes clusters with automatic service registration, health checks, and configuration management for microservices orchestration.

Prerequisites

  • Kubernetes cluster with kubectl access
  • Root or sudo access
  • At least 3 nodes for Consul cluster
  • Basic understanding of Kubernetes concepts

What this solves

Integrating Consul with Kubernetes provides dynamic service discovery, health checking, and configuration management for containerized applications. This eliminates the need for hard-coded service endpoints and enables automatic service registration when pods are created or destroyed.

This tutorial covers installing a Consul server cluster, deploying Consul agents on Kubernetes nodes, configuring automatic service registration, and setting up health checks for seamless microservices communication.

Step-by-step installation

Update system packages

Start by updating your package manager so you install the latest package versions. Use the command that matches your distribution.

# Ubuntu / Debian
sudo apt update && sudo apt upgrade -y

# AlmaLinux / Rocky Linux
sudo dnf update -y
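
The two commands above target different distribution families. If you script the setup, a small detection helper like this (a sketch, not part of any official tooling) picks the right one:

```shell
# Detect the package manager for the supported distros:
# Ubuntu/Debian ship apt-get, AlmaLinux/Rocky ship dnf.
if command -v apt-get >/dev/null 2>&1; then
  pm="apt"
elif command -v dnf >/dev/null 2>&1; then
  pm="dnf"
else
  pm="unknown"
fi
echo "Detected package manager: $pm"
```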

Install Consul server

Download and install Consul on the server nodes that will form the Consul cluster.

cd /tmp
wget https://releases.hashicorp.com/consul/1.17.0/consul_1.17.0_linux_amd64.zip
sudo apt install -y unzip   # on AlmaLinux/Rocky: sudo dnf install -y unzip
unzip consul_1.17.0_linux_amd64.zip
sudo mv consul /usr/local/bin/
sudo chmod +x /usr/local/bin/consul

Create Consul user and directories

Create a dedicated user for Consul and set up the required directory structure with proper permissions.

sudo useradd --system --home /etc/consul.d --shell /bin/false consul
sudo mkdir -p /opt/consul /etc/consul.d
sudo chown consul:consul /opt/consul /etc/consul.d
sudo chmod 755 /opt/consul /etc/consul.d

Generate Consul encryption key

Create an encryption key for securing Consul cluster communication. Save this key as you'll need it on all cluster nodes.

consul keygen
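
Recent Consul releases emit a base64-encoded 32-byte gossip key (older releases used 16 bytes). If you want to sanity-check a key before distributing it, verify that it decodes to exactly 32 bytes; the openssl command below is only a stand-in that produces the same shape, for illustration:

```shell
# Stand-in for `consul keygen` output: 32 random bytes, base64-encoded.
key=$(openssl rand -base64 32)

# A valid gossip key for current Consul must decode to exactly 32 bytes.
decoded_len=$(printf '%s' "$key" | base64 -d | wc -c)
echo "key length: $decoded_len bytes"
```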

Configure Consul server

Create the main server configuration file at /etc/consul.d/consul.hcl with cluster settings and encryption. On each server, set a unique node_name and that node's own IP in bind_addr, and replace the encrypt placeholder with the key from consul keygen.

datacenter = "k8s-dc1"
data_dir = "/opt/consul"
log_level = "INFO"
node_name = "consul-server-1"
server = true
bootstrap_expect = 3
bind_addr = "203.0.113.10"
client_addr = "0.0.0.0"
retry_join = ["203.0.113.11", "203.0.113.12"]
encrypt = "your-encryption-key-here"
ui_config {
  enabled = true
}
connect {
  enabled = true
}
ports {
  grpc = 8502
}
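
Only node_name and bind_addr change between the three servers; the rest of the file must stay identical (especially encrypt and bootstrap_expect). One way to keep them consistent is to render all three files from a loop. This is a hypothetical helper using the tutorial's example IPs; retry_join may include a node's own address, which Consul tolerates:

```shell
# Render one consul.hcl per server node into ./render/<node>/.
# GOSSIP_KEY is the placeholder from this tutorial; substitute your real key.
GOSSIP_KEY="your-encryption-key-here"
i=1
for ip in 203.0.113.10 203.0.113.11 203.0.113.12; do
  dir="render/consul-server-$i"
  mkdir -p "$dir"
  cat > "$dir/consul.hcl" <<EOF
datacenter       = "k8s-dc1"
data_dir         = "/opt/consul"
log_level        = "INFO"
node_name        = "consul-server-$i"
server           = true
bootstrap_expect = 3
bind_addr        = "$ip"
client_addr      = "0.0.0.0"
retry_join       = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]
encrypt          = "$GOSSIP_KEY"
EOF
  i=$((i + 1))
done
ls render
```

Copy each rendered file to /etc/consul.d/consul.hcl on the matching server.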

Create Consul systemd service

Set up a systemd service file to manage the Consul server process.

[Unit]
Description=Consul
Documentation=https://www.consul.io/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/consul.d/consul.hcl

[Service]
Type=notify
User=consul
Group=consul
ExecStart=/usr/local/bin/consul agent -config-dir=/etc/consul.d/
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start Consul server cluster

Enable and start the Consul service on all server nodes to form the cluster. You can catch configuration syntax errors first with consul validate /etc/consul.d/.

sudo systemctl daemon-reload
sudo systemctl enable consul
sudo systemctl start consul
sudo systemctl status consul

Install Helm for Kubernetes deployment

Install the Helm package manager and add HashiCorp's chart repository, which hosts the official Consul chart used to deploy Consul agents on Kubernetes.

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

Create Kubernetes namespace for Consul

Create a dedicated namespace for Consul components in your Kubernetes cluster.

kubectl create namespace consul

Create Consul Kubernetes configuration

Save the following Helm values as consul-values.yaml to connect the Kubernetes Consul clients to your external Consul cluster.

global:
  name: consul
  datacenter: k8s-dc1
  gossipEncryption:
    secretName: consul-gossip-encryption-key
    secretKey: gossip-encryption-key
  tls:
    enabled: false
  acls:
    manageSystemACLs: false

server:
  enabled: false
  
client:
  enabled: true
  join:
    - "203.0.113.10"
    - "203.0.113.11" 
    - "203.0.113.12"
  grpc: true

connectInject:
  enabled: true
  default: false

syncCatalog:
  enabled: true
  toConsul: true
  toK8S: true
  k8sPrefix: "k8s-"
  consulPrefix: "consul-"

dns:
  enabled: true
  enableRedirection: true

Create Consul encryption secret

Store the Consul encryption key as a Kubernetes secret for the agents to use.

kubectl create secret generic consul-gossip-encryption-key \
  --from-literal="gossip-encryption-key=your-encryption-key-here" \
  --namespace consul
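
If you prefer declarative manifests over imperative kubectl commands, the same secret can be expressed as YAML. Using stringData lets you supply the key in plain text and Kubernetes base64-encodes it on write; as elsewhere, substitute your real key for the placeholder:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: consul-gossip-encryption-key
  namespace: consul
type: Opaque
stringData:
  # Plain-text value; replace with the output of `consul keygen`.
  gossip-encryption-key: your-encryption-key-here
```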

Deploy Consul agents to Kubernetes

Install the Consul Helm chart with your configuration to deploy client agents on all Kubernetes nodes. The values above use the client-agent model, so pin a chart release that still ships client agents; the consul-k8s 1.x charts moved to the Consul Dataplane architecture and no longer honor the client block.

helm install consul hashicorp/consul \
  --namespace consul \
  --values consul-values.yaml \
  --version "0.49.0"

Configure automatic service registration

Create a sample application with Consul annotations for automatic registration, saved as sample-app.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
      annotations:
        consul.hashicorp.com/connect-inject: "true"
        consul.hashicorp.com/connect-service: "web-app"
        consul.hashicorp.com/connect-service-port: "80"
        consul.hashicorp.com/service-tags: "web,http"
    spec:
      containers:
      - name: web-app
        image: nginx:latest
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
  namespace: default
  annotations:
    consul.hashicorp.com/service-name: "web-app"
    consul.hashicorp.com/service-tags: "web,loadbalancer"
spec:
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP

Deploy the sample application

Apply the sample application configuration to test automatic service registration.

kubectl apply -f sample-app.yaml

Configure health checks

Create a more advanced service configuration with custom health checks and metadata, saved as api-service.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
      annotations:
        consul.hashicorp.com/connect-inject: "true"
        consul.hashicorp.com/connect-service: "api-service"
        consul.hashicorp.com/connect-service-port: "3000"
        consul.hashicorp.com/service-tags: "api,backend,v1"
        consul.hashicorp.com/service-meta-version: "1.0.0"
        consul.hashicorp.com/service-meta-team: "platform"
    spec:
      containers:
      - name: api-service
        image: node:18-alpine
        ports:
        - containerPort: 3000
          name: http
        command: ["sh", "-c"]
        args:
          - |
            cat > server.js << EOF
            const http = require('http');
            const server = http.createServer((req, res) => {
              if (req.url === '/health') {
                res.writeHead(200, {'Content-Type': 'application/json'});
                res.end(JSON.stringify({status: 'healthy', timestamp: new Date()}));
              } else {
                res.writeHead(200, {'Content-Type': 'application/json'});
                res.end(JSON.stringify({message: 'API Service Running', version: '1.0.0'}));
              }
            });
            server.listen(3000, () => console.log('Server running on port 3000'));
            EOF
            node server.js
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 10
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 10

Apply the API service configuration

Deploy the API service with health check configuration to demonstrate advanced Consul integration.

kubectl apply -f api-service.yaml

Configure firewall rules

Open the necessary ports for Consul communication between nodes and Kubernetes integration.

# Ubuntu / Debian (ufw)
sudo ufw allow 8300/tcp comment 'Consul server RPC'
sudo ufw allow 8301/tcp comment 'Consul Serf LAN'
sudo ufw allow 8301/udp comment 'Consul Serf LAN'
sudo ufw allow 8302/tcp comment 'Consul Serf WAN'
sudo ufw allow 8302/udp comment 'Consul Serf WAN'
sudo ufw allow 8500/tcp comment 'Consul HTTP API'
sudo ufw allow 8502/tcp comment 'Consul gRPC API'
sudo ufw allow 8600/tcp comment 'Consul DNS'
sudo ufw allow 8600/udp comment 'Consul DNS'

# AlmaLinux / Rocky Linux (firewalld)
sudo firewall-cmd --permanent --add-port=8300/tcp --add-port=8301/tcp --add-port=8301/udp
sudo firewall-cmd --permanent --add-port=8302/tcp --add-port=8302/udp --add-port=8500/tcp
sudo firewall-cmd --permanent --add-port=8502/tcp --add-port=8600/tcp --add-port=8600/udp
sudo firewall-cmd --reload
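
With nine ports to open it is easy to miss one. The ufw rules can also be generated from a single port list; this dry-run sketch (a hypothetical helper) only prints the commands so you can review them before piping the file to sh:

```shell
# Print one `ufw allow` command per Consul port; review, then pipe to `sh` to apply.
ports="8300/tcp 8301/tcp 8301/udp 8302/tcp 8302/udp 8500/tcp 8502/tcp 8600/tcp 8600/udp"
for p in $ports; do
  echo "sudo ufw allow $p"
done | tee consul-ufw-rules.txt
```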

Verify your setup

Check that your Consul cluster is operational and Kubernetes services are being registered automatically.

Check Consul cluster members
consul members

Verify Consul leader election

consul operator raft list-peers

Check Kubernetes pods are running

kubectl get pods -n consul

Verify services are registered in Consul

consul catalog services

Check specific service details

consul catalog nodes -service web-app

Test DNS resolution

dig @203.0.113.10 -p 8600 web-app.service.consul

Check service health

curl http://localhost:8500/v1/health/checks/web-app
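
Consul's HTTP API exposes health data at /v1/health/checks/<service>, returning one entry per check. Here is an offline sketch of confirming that every instance reports passing; the sample payload below is illustrative, not captured from a real cluster:

```shell
# Sample of the JSON shape returned by /v1/health/checks/web-app (illustrative).
cat > sample-health.json <<'EOF'
[
  {"Node": "k8s-node-1", "CheckID": "service:web-app", "Status": "passing"},
  {"Node": "k8s-node-2", "CheckID": "service:web-app", "Status": "passing"}
]
EOF

# Count passing checks vs total entries; against a live cluster, replace the
# file with: curl -s http://localhost:8500/v1/health/checks/web-app
passing=$(grep -c '"Status": "passing"' sample-health.json)
total=$(grep -c '"CheckID"' sample-health.json)
echo "$passing/$total checks passing"
```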

Configure service mesh communication

Enable service-to-service communication using Consul Connect for secure microservices networking.

Create service intentions

Define which services can communicate with each other using Consul's intention system. Save this configuration entry as api-intentions.hcl.

Kind = "service-intentions"
Name = "api-service"
Sources = [
  {
    Name   = "web-app"
    Action = "allow"
  },
  {
    Name   = "*"
    Action = "deny"
  }
]

Apply service intentions

Load the intention configuration into Consul to enforce service communication policies.

consul config write api-intentions.hcl

Configure service mesh for existing deployments

Update your existing services to use Consul Connect sidecar proxies for secure communication.

# Restart deployments to inject Connect sidecars
kubectl rollout restart deployment/web-app
kubectl rollout restart deployment/api-service

Check sidecar injection

kubectl get pods -o wide

Common issues

  • Consul agents can't join the cluster. Cause: wrong encryption key or blocked ports. Fix: verify the key matches on every node and check firewall rules, then confirm with consul members.
  • Services don't appear in the Consul catalog. Cause: catalog sync disabled or annotations missing. Fix: check syncCatalog.enabled in the Helm values and verify pod annotations with kubectl describe pod.
  • Connect injection isn't happening. Cause: the connect-inject webhook isn't running. Fix: check the webhook pods with kubectl get pods -n consul -l component=connect-injector.
  • DNS resolution fails. Cause: Consul DNS isn't configured in the cluster. Fix: verify the DNS service with kubectl get svc -n consul consul-dns.
  • Health checks always fail. Cause: wrong health-check endpoint or probe timing. Fix: check application logs with kubectl logs <pod-name> and adjust the probe settings.
  • Service mesh traffic is blocked. Cause: default-deny intentions or missing allow rules. Fix: create an allow intention, e.g. consul intention create web-app api-service.

Next steps

This walkthrough keeps TLS (tls.enabled: false) and ACLs (acls.manageSystemACLs: false) disabled to stay focused on service discovery. Before production use, enable TLS for agent and API traffic, bootstrap the ACL system, and restrict client_addr from 0.0.0.0 to the interfaces that actually need API access.