Configure Kubernetes network policies with Calico CNI for microsegmentation and security enforcement

Advanced · 45 min · Apr 08, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Learn to implement advanced network security in Kubernetes using Calico CNI. Configure namespace-based microsegmentation, application-level policies, and comprehensive monitoring for enterprise-grade cluster protection.

Prerequisites

  • Running Kubernetes cluster (1.24+ recommended)
  • Cluster administrator access with kubectl configured
  • Understanding of Kubernetes networking concepts
  • Basic knowledge of YAML and network security principles

What this solves

Kubernetes network policies with Calico CNI provide advanced microsegmentation and security enforcement for container workloads. This tutorial shows you how to implement namespace-based isolation, application-level traffic controls, and comprehensive monitoring to secure your cluster against lateral movement and unauthorized network access.


Step-by-step installation and configuration

Install Calico CNI on Kubernetes cluster

Install Calico as the Container Network Interface to enable network policy enforcement capabilities.

# The operator manifest creates its own namespaces (tigera-operator now,
# calico-system once the Installation resource is applied), so no manual
# namespace creation is needed.
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml

Configure Calico installation manifest

Create a custom installation configuration to enable policy enforcement and monitoring features, and save it as /tmp/calico-installation.yaml.

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
  # An imagePullSecret is only needed for private registries (e.g. Calico
  # Enterprise); the open-source images are public on quay.io.
  registry: quay.io
  nodeUpdateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  flexVolumePath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
  nodeMetricsPort: 9091
  typhaMetricsPort: 9093

Apply Calico configuration

Deploy Calico with the custom configuration and verify all components are running.

kubectl apply -f /tmp/calico-installation.yaml
kubectl get pods -n calico-system --watch

Install Calico CLI tool

Download and install the calicoctl command-line tool for advanced policy management.

curl -L https://github.com/projectcalico/calico/releases/download/v3.26.4/calicoctl-linux-amd64 -o calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/local/bin/
calicoctl version

Configure calicoctl for cluster access

Set up calicoctl to communicate with your Kubernetes cluster using the existing kubeconfig. Save the following as /etc/calico/calicoctl.cfg (the default location calicoctl reads), adjusting the kubeconfig path for your user.

apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"

Create test application namespaces

Set up separate namespaces for different application tiers to demonstrate microsegmentation.

kubectl create namespace frontend
kubectl create namespace backend
kubectl create namespace database
kubectl label namespace frontend tier=frontend
kubectl label namespace backend tier=backend
kubectl label namespace database tier=database

Deploy test applications

Deploy sample applications in each namespace to test network policy enforcement. Save the combined manifests as /tmp/test-apps.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-app
  namespace: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.24
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: frontend
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-app
  namespace: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
        tier: backend
    spec:
      containers:
      - name: app
        image: httpd:2.4
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: backend
spec:
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database-app
  namespace: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
        tier: database
    spec:
      containers:
      - name: postgres
        image: postgres:15
        env:
        - name: POSTGRES_PASSWORD
          value: "securepassword123"
        ports:
        - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: database-service
  namespace: database
spec:
  selector:
    app: database
  ports:
  - port: 5432
    targetPort: 5432

Apply test applications

Deploy the test applications and verify they can communicate before implementing network policies.

kubectl apply -f /tmp/test-apps.yaml
kubectl get pods -A | grep -E "frontend|backend|database"

Create namespace-based network policies for microsegmentation

Implement default deny policy

Create a default deny-all policy for each namespace to establish a zero-trust network foundation. Save the manifests as /tmp/default-deny-policy.yaml.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: frontend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: database
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
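
Since the three policies above differ only in their namespace, the manifest can also be generated rather than written by hand. A minimal sketch, writing to the same /tmp path used in the apply step:

```shell
# Generate an identical default-deny policy for each tier namespace.
for ns in frontend backend database; do
cat <<EOF
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: ${ns}
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
EOF
done > /tmp/default-deny-policy.yaml
```

This keeps the three copies from drifting apart if you later tighten or relax the baseline.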

Configure DNS access policy

Allow DNS resolution for all pods to maintain basic cluster functionality. Save the manifests as /tmp/allow-dns-policy.yaml.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: frontend
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: database
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
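
DNS responses that exceed 512 bytes fall back to TCP, so you may also want to permit TCP on port 53. A fragment showing the extended ports list for each allow-dns policy:

```yaml
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```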

Apply network policies

Deploy the default deny and DNS policies to establish baseline network security.

kubectl apply -f /tmp/default-deny-policy.yaml
kubectl apply -f /tmp/allow-dns-policy.yaml
kubectl label namespace kube-system name=kube-system

Implement application-level network security policies

Create frontend to backend communication policy

Allow frontend applications to communicate with backend services on specific ports. Save the manifests as /tmp/frontend-to-backend-policy.yaml.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-to-backend
  namespace: backend
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tier: frontend
      podSelector:
        matchLabels:
          tier: frontend
    ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-egress-to-backend
  namespace: frontend
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          tier: backend
      podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 80
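
A note on selector semantics: in the rules above, namespaceSelector and podSelector appear in the same from/to element, so a peer must match both (a tier=frontend pod in a tier=frontend namespace). Splitting them into two list items changes the meaning to OR, which is far more permissive:

```yaml
  ingress:
  - from:
    # Two separate elements: matches EITHER any pod in a tier=frontend
    # namespace OR any tier=frontend pod in ANY namespace.
    - namespaceSelector:
        matchLabels:
          tier: frontend
    - podSelector:
        matchLabels:
          tier: frontend
```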

Create backend to database communication policy

Allow backend applications to access database services with port-specific restrictions. Save the manifests as /tmp/backend-to-database-policy.yaml.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-to-database
  namespace: database
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tier: backend
      podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 5432
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-egress-to-database
  namespace: backend
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          tier: database
      podSelector:
        matchLabels:
          tier: database
    ports:
    - protocol: TCP
      port: 5432

Create external access policy for frontend

Allow external traffic to reach frontend services while maintaining internal security. Save the manifest as /tmp/frontend-external-policy.yaml.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-to-frontend
  namespace: frontend
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
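
This rule places no restriction on sources (a rule without a from clause matches all traffic). If external ingress should only arrive from outside the cluster, an ipBlock can scope it instead; the CIDRs below are placeholders for your environment:

```yaml
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8   # example: exclude in-cluster pod/node ranges
    ports:
    - protocol: TCP
      port: 80
```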

Apply application-level policies

Deploy the communication policies between application tiers.

kubectl apply -f /tmp/frontend-to-backend-policy.yaml
kubectl apply -f /tmp/backend-to-database-policy.yaml
kubectl apply -f /tmp/frontend-external-policy.yaml

Configure advanced Calico policy with ordered allow and deny rules

Create a Calico-specific policy that combines an Allow rule with an explicit Deny. Calico evaluates rules in order, which plain Kubernetes NetworkPolicies cannot express. (Note: open-source Calico does not enforce time-based access schedules; implement any time-of-day restrictions outside the policy engine.) Save the policy as /tmp/calico-advanced-policy.yaml.

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: ordered-database-access
  namespace: database
spec:
  selector: tier == "database"
  types:
  - Ingress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: tier == "backend"
    destination:
      ports:
      - 5432
  - action: Deny
    protocol: TCP
    destination:
      ports:
      - 5432

Apply advanced Calico policy

Deploy the ordered policy using calicoctl.

calicoctl apply -f /tmp/calico-advanced-policy.yaml
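
Calico policies also carry an explicit order field (lower values are evaluated first), which controls precedence when multiple policies select the same endpoints. A fragment showing the field; the value 100 is arbitrary:

```yaml
spec:
  order: 100   # evaluated before policies with a higher (or unset) order
```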

Monitor and troubleshoot Calico network policies

Enable Calico policy logging

Configure log collection for network policy decisions to monitor traffic flows. Save the configuration as /tmp/felix-configuration.yaml.

apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  logFilePath: /var/log/calico/felix.log
  policySyncPathPrefix: /var/run/nodeagent
  prometheusMetricsEnabled: true
  prometheusMetricsPort: 9091
  reportingInterval: 30s

Apply the Felix configuration

Apply the Felix settings with calicoctl and confirm they took effect. (The calico-policy-only manifest is a standalone installation variant rather than a monitoring add-on; applying it on top of an operator-managed cluster would conflict with the existing installation, so it is omitted here.)

calicoctl apply -f /tmp/felix-configuration.yaml
calicoctl get felixconfiguration default -o yaml

Test policy enforcement

Verify that network policies are working correctly by testing connections between pods.

# Test allowed connection (frontend to backend)
kubectl exec -n frontend deployment/frontend-app -- curl -m 5 backend-service.backend.svc.cluster.local

# Test blocked connection (frontend to database - should time out)
kubectl exec -n frontend deployment/frontend-app -- curl -m 5 database-service.database.svc.cluster.local:5432

# Test allowed connection (backend to database)
kubectl exec -n backend deployment/backend-app -- nc -zv database-service.database.svc.cluster.local 5432

Monitor policy violations

Check Calico logs for policy violations and blocked connections.

kubectl logs -n calico-system -l k8s-app=calico-node | grep -i "policy"
calicoctl get networkpolicy --all-namespaces
calicoctl get profile

Set up metrics collection

Configure Prometheus to scrape Calico metrics for comprehensive monitoring. The ServiceMonitor resource requires the Prometheus Operator CRDs to be installed. Save the manifests as /tmp/calico-metrics-service.yaml.

apiVersion: v1
kind: Service
metadata:
  name: calico-node-metrics
  namespace: calico-system
  labels:
    k8s-app: calico-node
spec:
  ports:
  - name: calico-metrics-port
    port: 9091
    targetPort: 9091
  selector:
    k8s-app: calico-node
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: calico-node
  namespace: calico-system
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  endpoints:
  - port: calico-metrics-port
    interval: 30s
    path: /metrics
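
If the Prometheus Operator is not installed, the ServiceMonitor CRD will not exist; a plain Prometheus static scrape config is a workable alternative (the node address is a placeholder for your environment):

```yaml
scrape_configs:
- job_name: calico-felix
  static_configs:
  - targets: ['<node-ip>:9091']   # Felix metrics port from the FelixConfiguration
```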

Apply monitoring configuration

Deploy the metrics collection service for ongoing policy monitoring.

kubectl apply -f /tmp/calico-metrics-service.yaml

Note: Network policies are namespace-scoped and enforced at the pod level. Always test policy changes in a staging environment before applying them to production workloads.

Verify your setup

# Check Calico component status
kubectl get pods -n calico-system

# Verify network policies are applied
kubectl get networkpolicy --all-namespaces

# Test policy enforcement
kubectl exec -n frontend deployment/frontend-app -- curl -m 5 backend-service.backend.svc.cluster.local

# Check Calico node status
calicoctl node status

# View policy details
calicoctl get networkpolicy --all-namespaces -o wide

Common issues

Symptom | Cause | Fix
Pods cannot resolve DNS | DNS policy not configured | Apply the DNS egress policy and label the kube-system namespace
Legitimate connections blocked | Overly restrictive policies | Review pod selectors and namespace labels in policies
Policy not enforcing | Calico node not running | Check kubectl get pods -n calico-system and restart failed pods
calicoctl commands fail | Configuration file missing | Create /etc/calico/calicoctl.cfg with the correct kubeconfig path
Metrics not available | Prometheus integration not configured | Enable Felix metrics and deploy the ServiceMonitor
Cross-namespace communication fails | Missing namespace selectors | Ensure both ingress and egress policies include the correct namespace labels

