Configure Calico CNI to enforce network policies for pod-to-pod traffic control and namespace isolation. This tutorial covers advanced microsegmentation patterns, ingress/egress rules, and policy monitoring for production Kubernetes security.
Prerequisites
- Kubernetes cluster with administrative access
- kubectl configured and working
- Internet connectivity for downloading Calico manifests
What this solves
Kubernetes network policies with Calico provide fine-grained traffic control between pods and namespaces. This enables zero-trust networking where you explicitly allow required communications while blocking everything else by default. You need this for compliance requirements, multi-tenant clusters, or any production environment where workloads should be isolated from each other.
Step-by-step installation
Install Calico CNI with network policy support
Install Calico as the primary CNI plugin for your Kubernetes cluster with policy enforcement enabled.
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml
Verify Calico installation
Check that all Calico components are running and network policies are supported.
kubectl get pods -n calico-system
kubectl get nodes -o wide
kubectl get crd | grep calico
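With the operator-based install, you can also check overall component health through the TigeraStatus resource that the operator maintains; each component should eventually report AVAILABLE as True:

```shell
# Operator-managed installs only: summarizes calico/apiserver component health
kubectl get tigerastatus
```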
Install calicoctl for policy management
Install the Calico CLI tool for advanced policy management and troubleshooting.
curl -L https://github.com/projectcalico/calico/releases/download/v3.26.4/calicoctl-linux-amd64 -o calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/local/bin/
Configure calicoctl datastore access
Set up calicoctl to communicate with your Kubernetes cluster for policy management. By default calicoctl reads its configuration from /etc/calico/calicoctl.cfg, so save the following there (adjust the kubeconfig path if you are not running as root):
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata: {}
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"
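If you prefer not to maintain a config file, calicoctl can also read its datastore settings from environment variables:

```shell
# Equivalent environment-variable configuration for calicoctl
export DATASTORE_TYPE=kubernetes
export KUBECONFIG=~/.kube/config
calicoctl get nodes   # quick sanity check against the cluster
```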
Create namespace-based network policies
Create test namespaces for microsegmentation
Set up separate namespaces to demonstrate isolation between different application tiers.
kubectl create namespace frontend
kubectl create namespace backend
kubectl create namespace database
kubectl label namespace frontend tier=frontend
kubectl label namespace backend tier=backend
kubectl label namespace database tier=database
Deploy test applications in each namespace
Create simple test pods to verify network policy enforcement between tiers. Save the following as frontend-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod
  namespace: frontend
  labels:
    app: frontend
    tier: frontend
spec:
  containers:
  - name: frontend
    image: nginx:1.25
    ports:
    - containerPort: 80
kubectl apply -f frontend-pod.yaml
Create backend and database test pods
Deploy additional test workloads to simulate a multi-tier application architecture. Save the first manifest as backend-pod.yaml and the second as database-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod
  namespace: backend
  labels:
    app: backend
    tier: backend
spec:
  containers:
  - name: backend
    image: httpd:2.4
    ports:
    - containerPort: 80

apiVersion: v1
kind: Pod
metadata:
  name: database-pod
  namespace: database
  labels:
    app: database
    tier: database
spec:
  containers:
  - name: database
    image: postgres:16
    env:
    - name: POSTGRES_PASSWORD
      value: "testpass123"
    ports:
    - containerPort: 5432
kubectl apply -f backend-pod.yaml
kubectl apply -f database-pod.yaml
Implement default deny policy
Create a default deny policy in each namespace that blocks all ingress and egress traffic unless explicitly allowed. Save the following as default-deny.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: frontend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: database
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
kubectl apply -f default-deny.yaml
Implement pod-to-pod traffic controls
Allow frontend to backend communication
Create a policy that allows frontend pods to communicate with backend pods on specific ports. Save the following as frontend-to-backend.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: backend
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tier: frontend
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 8080
kubectl apply -f frontend-to-backend.yaml
Allow backend to database communication
Configure a policy that permits backend services to reach the database tier only on the PostgreSQL port. Save the following as backend-to-database.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-database
  namespace: database
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 5432
kubectl apply -f backend-to-database.yaml
Configure egress policies for external access
Allow pods in each namespace to resolve DNS and reach the next application tier. Save the following as allow-dns-egress.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: frontend
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to: []
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  - to:
    - namespaceSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-egress
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to: []
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  - to:
    - namespaceSelector:
        matchLabels:
          tier: database
    ports:
    - protocol: TCP
      port: 5432
kubectl apply -f allow-dns-egress.yaml
Configure advanced selectors and monitoring
Implement pod-specific policies with labels
Create granular policies that combine namespace and pod selectors for fine-grained access control within namespaces. The api-gateway and auth-service pods below are hypothetical. Save the following as pod-specific-policy.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-gateway-policy
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: api-gateway
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    # namespaceSelector and podSelector in a single entry are ANDed:
    # only pods labeled app=frontend in namespaces labeled tier=frontend
    - namespaceSelector:
        matchLabels:
          tier: frontend
      podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: auth-service
    ports:
    - protocol: TCP
      port: 9090
  - to: []
    ports:
    - protocol: UDP
      port: 53
kubectl apply -f pod-specific-policy.yaml
Enable network policy logging
Configure Calico to log network policy decisions for monitoring and troubleshooting.
calicoctl patch felixconfiguration default --patch '{"spec":{"policySyncPathPrefix":"/var/run/calico","chainInsertMode":"insert","logSeverityScreen":"Info","logFilePath":"/var/log/calico/felix.log"}}'
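Felix's screen log verbosity is cluster-wide; to record specific flows you can also use Calico's Log rule action, which writes matched packets to the node's syslog and then continues evaluating subsequent rules, so it does not change what is allowed. A sketch (the policy name and order value are illustrative), applied with calicoctl apply -f:

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: log-database-ingress   # illustrative name
spec:
  order: 90
  selector: projectcalico.org/namespace == "database"
  types:
  - Ingress
  ingress:
  # Log matched packets, then fall through to the remaining policies
  - action: Log
    protocol: TCP
    destination:
      ports:
      - 5432
```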
Create Calico GlobalNetworkPolicy for cluster-wide rules
Implement cluster-wide policies that apply across all namespaces using Calico-specific resources. Save the following as global-policy.yaml:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-all-non-system
spec:
  order: 100
  selector: projectcalico.org/namespace != "kube-system" && projectcalico.org/namespace != "calico-system"
  types:
  - Ingress
  - Egress
  egress:
  - action: Allow
    protocol: TCP
    destination:
      selector: projectcalico.org/namespace == "kube-system"
      ports:
      - 53
  - action: Allow
    protocol: UDP
    destination:
      selector: projectcalico.org/namespace == "kube-system"
      ports:
      - 53
calicoctl apply -f global-policy.yaml
Configure monitoring and alerts for policy violations
Set up logging to track denied connections and policy violations for security monitoring. With the operator-based install used here, direct patches to the calico-node DaemonSet are reverted by the Tigera operator, so drive Felix settings through the FelixConfiguration resource instead:
calicoctl patch felixconfiguration default --patch '{"spec":{"logSeverityScreen":"Debug"}}'
Test network policy enforcement
Test allowed connections
Verify that explicitly allowed traffic flows work correctly between tiers. The test pods are not backed by Services, so names like backend-pod.backend.svc.cluster.local will not resolve; look up the pod IPs instead:
BACKEND_IP=$(kubectl get pod -n backend backend-pod -o jsonpath='{.status.podIP}')
DATABASE_IP=$(kubectl get pod -n database database-pod -o jsonpath='{.status.podIP}')
kubectl exec -n frontend frontend-pod -- curl -m 5 http://$BACKEND_IP
kubectl exec -n backend backend-pod -- nc -zv $DATABASE_IP 5432
Test blocked connections
Confirm that unauthorized traffic is properly blocked by the network policies. These commands should time out:
FRONTEND_IP=$(kubectl get pod -n frontend frontend-pod -o jsonpath='{.status.podIP}')
kubectl exec -n frontend frontend-pod -- nc -zv $DATABASE_IP 5432
kubectl exec -n database database-pod -- curl -m 5 http://$FRONTEND_IP
Verify your setup
kubectl get networkpolicies --all-namespaces
calicoctl get networkpolicy --all-namespaces
calicoctl get globalnetworkpolicy
kubectl get pods --all-namespaces -o wide
Common issues
| Symptom | Cause | Fix |
|---|---|---|
| Pods cannot communicate after policy application | Missing DNS egress rules | Add UDP/TCP port 53 egress to all pods |
| Services timing out intermittently | Missing kube-system namespace access | Allow egress to kube-system for system services |
| Ingress controller not working | Network policy blocking ingress pod | Create policy allowing traffic from ingress namespace |
| calicoctl commands fail | Incorrect kubeconfig path | Set KUBECONFIG environment variable or update calicoctl.cfg |
| Policies not enforced | CNI not supporting network policies | Verify Calico is installed as primary CNI |
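For the ingress-controller row above, a policy along these lines usually resolves it. This sketch assumes an NGINX Ingress Controller running in the ingress-nginx namespace and relies on the kubernetes.io/metadata.name label that Kubernetes sets automatically on every namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-controller
  namespace: frontend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx   # assumption: controller namespace
    ports:
    - protocol: TCP
      port: 80
```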
Next steps
- Configure Kubernetes Pod Security Standards with admission controllers for comprehensive pod-level security
- Integrate OPA Gatekeeper with ArgoCD for GitOps policy management
- Set up ingress controllers with network policy integration
- Configure Calico BGP routing for on-premises clusters
- Monitor network policy metrics with Prometheus and Grafana
Automated install script
Run the following script to automate the entire setup.
#!/usr/bin/env bash
set -euo pipefail
# Calico Network Policies Installation Script
# Implements Kubernetes network policies with Calico for microsegmentation
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Configuration
CALICO_VERSION="v3.26.4"
KUBE_CONFIG="${HOME}/.kube/config"
# Functions
log_info() { echo -e "${GREEN}[INFO]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
cleanup() {
log_error "Installation failed. Cleaning up..."
kubectl delete -f https://raw.githubusercontent.com/projectcalico/calico/${CALICO_VERSION}/manifests/custom-resources.yaml 2>/dev/null || true
kubectl delete -f https://raw.githubusercontent.com/projectcalico/calico/${CALICO_VERSION}/manifests/tigera-operator.yaml 2>/dev/null || true
rm -f /tmp/calico-*.yaml /tmp/calicoctl 2>/dev/null || true
}
trap cleanup ERR
usage() {
echo "Usage: $0 [OPTIONS]"
echo "Options:"
echo " --skip-test Skip deployment of test applications"
echo " --help Show this help message"
exit 1
}
# Parse arguments
SKIP_TEST=false
for arg in "$@"; do
case $arg in
--skip-test) SKIP_TEST=true ;;
--help) usage ;;
*) log_error "Unknown option: $arg"; usage ;;
esac
done
# Detect distribution
if [ -f /etc/os-release ]; then
. /etc/os-release
case "$ID" in
ubuntu|debian) PKG_MGR="apt"; PKG_INSTALL="apt install -y" ;;
almalinux|rocky|centos|rhel|ol|fedora) PKG_MGR="dnf"; PKG_INSTALL="dnf install -y" ;;
amzn) PKG_MGR="yum"; PKG_INSTALL="yum install -y" ;;
*) log_error "Unsupported distro: $ID"; exit 1 ;;
esac
else
log_error "Cannot detect distribution"; exit 1
fi
# Check prerequisites
echo "[1/8] Checking prerequisites..."
if [ "$EUID" -eq 0 ]; then
log_warn "Running as root. Consider using a non-root user with sudo access."
fi
if ! command -v kubectl &> /dev/null; then
log_error "kubectl not found. Please install kubectl first."
exit 1
fi
if ! kubectl cluster-info &> /dev/null; then
log_error "Cannot connect to Kubernetes cluster. Check your kubeconfig."
exit 1
fi
if ! command -v curl &> /dev/null; then
log_info "Installing curl..."
case "$PKG_MGR" in
apt) apt update && $PKG_INSTALL curl ;;
*) $PKG_INSTALL curl ;;
esac
fi
# Install Calico operator
echo "[2/8] Installing Calico operator..."
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/${CALICO_VERSION}/manifests/tigera-operator.yaml
# Install Calico custom resources
echo "[3/8] Installing Calico custom resources..."
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/${CALICO_VERSION}/manifests/custom-resources.yaml
# Wait for Calico pods to be ready
echo "[4/8] Waiting for Calico components..."
kubectl rollout status deployment/tigera-operator -n tigera-operator --timeout=300s
# The operator creates the calico-system namespace and calico-node DaemonSet asynchronously,
# so wait for the DaemonSet to exist before watching its rollout
until kubectl get daemonset calico-node -n calico-system &> /dev/null; do sleep 5; done
kubectl rollout status daemonset/calico-node -n calico-system --timeout=300s
# Install calicoctl
echo "[5/8] Installing calicoctl..."
curl -L https://github.com/projectcalico/calico/releases/download/${CALICO_VERSION}/calicoctl-linux-amd64 -o /tmp/calicoctl
chmod 755 /tmp/calicoctl
sudo mv /tmp/calicoctl /usr/local/bin/calicoctl
# Configure calicoctl
echo "[6/8] Configuring calicoctl..."
sudo mkdir -p /etc/calico
cat > /tmp/calicoctl.cfg << EOF
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata: {}
spec:
  datastoreType: "kubernetes"
  kubeconfig: "${KUBE_CONFIG}"
EOF
sudo mv /tmp/calicoctl.cfg /etc/calico/calicoctl.cfg
sudo chmod 644 /etc/calico/calicoctl.cfg
# Create test namespaces and applications (unless skipped)
if [ "$SKIP_TEST" = false ]; then
echo "[7/8] Creating test namespaces and applications..."
# Create namespaces
kubectl create namespace frontend --dry-run=client -o yaml | kubectl apply -f -
kubectl create namespace backend --dry-run=client -o yaml | kubectl apply -f -
kubectl create namespace database --dry-run=client -o yaml | kubectl apply -f -
# Label namespaces
kubectl label namespace frontend tier=frontend --overwrite
kubectl label namespace backend tier=backend --overwrite
kubectl label namespace database tier=database --overwrite
# Create test applications
cat > /tmp/test-apps.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod
  namespace: frontend
  labels:
    app: frontend
    tier: frontend
spec:
  containers:
  - name: frontend
    image: nginx:1.25
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod
  namespace: backend
  labels:
    app: backend
    tier: backend
spec:
  containers:
  - name: backend
    image: httpd:2.4
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: database-pod
  namespace: database
  labels:
    app: database
    tier: database
spec:
  containers:
  - name: database
    image: postgres:16
    env:
    - name: POSTGRES_PASSWORD
      value: "testpass123"
    ports:
    - containerPort: 5432
EOF
kubectl apply -f /tmp/test-apps.yaml
# Create default deny policies
cat > /tmp/default-deny.yaml << 'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: frontend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: database
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
EOF
kubectl apply -f /tmp/default-deny.yaml
rm -f /tmp/test-apps.yaml /tmp/default-deny.yaml
else
echo "[7/8] Skipping test application deployment..."
fi
# Verify installation
echo "[8/8] Verifying Calico installation..."
log_info "Checking Calico system pods..."
kubectl get pods -n calico-system
log_info "Checking Calico CRDs..."
kubectl get crd | grep calico
log_info "Verifying calicoctl..."
calicoctl version
log_info "Checking node status..."
kubectl get nodes -o wide
# Final success message
log_info "Calico network policies installation completed successfully!"
echo
echo "Next steps:"
echo "1. Review the default deny policies in frontend, backend, and database namespaces"
echo "2. Create specific allow policies for required communication paths"
echo "3. Test connectivity between pods to verify policy enforcement"
echo "4. Use 'calicoctl get networkpolicy --all-namespaces' to list policies"
echo "5. Use 'kubectl exec -it <pod-name> -n <namespace> -- <command>' to test connectivity"
Review the script before running. Execute with: bash install.sh