Install and configure Kubernetes cluster with kubeadm and security hardening

Intermediate · 45 min · Apr 01, 2026
Applies to: Ubuntu 24.04, Ubuntu 22.04, Debian 12, AlmaLinux 9, Rocky Linux 9, Fedora 41

Set up a production-ready Kubernetes cluster using kubeadm with proper security hardening, RBAC configuration, and CNI networking. Includes worker node setup and verification steps.

Prerequisites

  • At least 2GB RAM per node
  • 2 CPU cores minimum
  • Root or sudo access
  • Network connectivity between nodes
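
Before starting, the hardware prerequisites above can be verified with a short script (a minimal sketch; the 2-core / ~2 GB thresholds mirror kubeadm's own preflight checks):

```shell
#!/bin/sh
# Check CPU count and memory against kubeadm's minimums (2 cores, ~2 GB RAM).
CPUS=$(nproc)
MEM_MB=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)

[ "$CPUS" -ge 2 ] && echo "CPU: OK ($CPUS cores)" || echo "CPU: FAIL ($CPUS cores, need 2)"
[ "$MEM_MB" -ge 1700 ] && echo "RAM: OK (${MEM_MB} MB)" || echo "RAM: FAIL (${MEM_MB} MB, need ~2048)"
```

Run it on every node before proceeding; kubeadm will refuse to initialize on machines below these limits unless you pass --ignore-preflight-errors.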

What this solves

This tutorial helps you create a production-grade Kubernetes cluster using kubeadm with security best practices. You'll learn to set up control plane nodes, join worker nodes, configure networking, and implement security hardening measures including RBAC and network policies.

Step-by-step installation

Update system and install prerequisites

Start by updating your system and installing required packages for container runtime and networking.

On Debian/Ubuntu:

sudo apt update && sudo apt upgrade -y
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release

On AlmaLinux, Rocky Linux, or Fedora:

sudo dnf update -y
sudo dnf install -y curl gnupg2 device-mapper-persistent-data lvm2

Configure system prerequisites

Disable swap and configure kernel modules required for Kubernetes networking.

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

Load the required kernel modules and make them persist across reboots:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

Set the sysctl parameters Kubernetes networking depends on:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system
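
To confirm the modules and sysctl settings took effect, you can check (each sysctl line should print "= 1" if the configuration above was applied):

```shell
# Verify the kernel modules are loaded
lsmod | grep -E 'overlay|br_netfilter'

# Verify the sysctl values
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```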

Install containerd runtime

Install and configure containerd as the container runtime for Kubernetes.

On Debian/Ubuntu, add the Docker repository and install containerd (on Debian, replace "ubuntu" with "debian" in both URLs):

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y containerd.io

On AlmaLinux, Rocky Linux, or Fedora:

sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y containerd.io

On both families, generate a default configuration and switch to the systemd cgroup driver:

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
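
Before moving on, it is worth confirming containerd is healthy and actually using the systemd cgroup driver (the grep target assumes the sed edit above succeeded):

```shell
sudo systemctl is-active containerd               # should print "active"
sudo containerd config dump | grep SystemdCgroup  # should show "SystemdCgroup = true"
```

A mismatched cgroup driver between containerd and the kubelet is a common cause of pods crash-looping later, so catching it here saves debugging time.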

Install Kubernetes components

Add the Kubernetes repository and install kubeadm, kubelet, and kubectl.

On Debian/Ubuntu:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

On AlmaLinux, Rocky Linux, or Fedora, create the repository file, then install:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

On both families, enable the kubelet service:

sudo systemctl enable kubelet
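
A quick version check confirms the three components installed and are version-aligned:

```shell
kubeadm version -o short
kubelet --version
kubectl version --client
```

All three should report the same v1.29.x release; mixed minor versions between kubeadm and kubelet can cause init or upgrade failures.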

Initialize the control plane

Initialize the Kubernetes control plane with a security-focused configuration. Save the following as /etc/kubernetes/kubeadm-config.yaml, replacing 203.0.113.10 with your control plane node's address:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.0
controlPlaneEndpoint: "203.0.113.10:6443"
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "192.168.0.0/16"
apiServer:
  extraArgs:
    audit-log-maxage: "30"
    audit-log-maxbackup: "3"
    audit-log-maxsize: "100"
    audit-log-path: "/var/log/audit.log"
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction"
controllerManager:
  extraArgs:
    bind-address: "0.0.0.0"
scheduler:
  extraArgs:
    bind-address: "0.0.0.0"
etcd:
  local:
    extraArgs:
      listen-metrics-urls: "http://127.0.0.1:2381"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "203.0.113.10"
  bindPort: 6443
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serverTLSBootstrap: true
protectKernelDefaults: true
Then initialize the control plane:

sudo kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml
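
Note that serverTLSBootstrap: true in the kubelet configuration means each kubelet requests its serving certificate via a CSR that is not auto-approved. After nodes come up, pending CSRs must be approved manually (or by an approver controller you deploy):

```shell
# List certificate signing requests; kubelet serving CSRs show as Pending
kubectl get csr

# Approve them (this example approves every CSR; review the list first in production)
kubectl get csr -o name | xargs -r kubectl certificate approve
```

Until the serving CSRs are approved, commands that talk to the kubelet API, such as kubectl logs and kubectl exec, may fail with TLS errors.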

Configure kubectl access

Set up kubectl configuration for the current user to interact with the cluster.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Security note: The admin.conf file contains cluster administrator credentials. Protect it with chmod 600 and never share it.
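
Following the note above, restricting the kubeconfig's permissions is a one-liner:

```shell
# Make the kubeconfig readable by the owner only
chmod 600 $HOME/.kube/config
ls -l $HOME/.kube/config   # should show -rw-------
```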

Install Calico CNI

Deploy Calico as the Container Network Interface for pod networking and network policies.

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml

Save the following custom resources as /tmp/calico-custom.yaml (the cidr must match the podSubnet from the kubeadm configuration):
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
kubectl create -f /tmp/calico-custom.yaml
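
Calico takes a minute or two to roll out. One way to block until the operator-managed pods are ready (the calico-system namespace is created by the Tigera operator):

```shell
# Wait up to 5 minutes for all Calico pods to become Ready
kubectl wait --for=condition=Ready pods --all -n calico-system --timeout=300s
kubectl get pods -n calico-system
```

Nodes will stay NotReady until the CNI pods are running, so do not join workers or deploy workloads before this completes.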

Join worker nodes

Generate and use the join command to add worker nodes to your cluster.

kubeadm token create --print-join-command

Run the output command on each worker node:

sudo kubeadm join 203.0.113.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
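
After each worker joins, verify from the control plane node that it registers and eventually reports Ready:

```shell
kubectl get nodes -o wide

# If serverTLSBootstrap is enabled, new nodes will also have pending CSRs to approve
kubectl get csr
```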

Configure RBAC security

Create restricted service accounts and roles following the principle of least privilege. Save the following as /tmp/rbac-config.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: restricted-user
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: restricted-user
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f /tmp/rbac-config.yaml
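
You can verify the role behaves as intended with kubectl auth can-i, impersonating the service account:

```shell
# Allowed by the pod-reader role: should print "yes"
kubectl auth can-i list pods --as=system:serviceaccount:default:restricted-user -n default

# Not granted: both should print "no"
kubectl auth can-i delete pods --as=system:serviceaccount:default:restricted-user -n default
kubectl auth can-i create deployments --as=system:serviceaccount:default:restricted-user -n default
```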

Implement network policies

Create default network policies to restrict pod-to-pod communication. Save the following as /tmp/network-policy.yaml:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
kubectl apply -f /tmp/network-policy.yaml
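
A quick way to exercise the policies is to run two throwaway pods and test connectivity (the pod names here are illustrative). The test uses the server's pod IP directly because the egress rule above also blocks DNS lookups to kube-system:

```shell
# Start a server pod and wait for it
kubectl run np-server --image=nginx:1.25
kubectl wait --for=condition=Ready pod/np-server --timeout=120s
SERVER_IP=$(kubectl get pod np-server -o jsonpath='{.status.podIP}')

# Same-namespace traffic: should succeed under the allow-same-namespace policy
kubectl run np-client --rm -it --image=busybox:1.36 --restart=Never -- \
  wget -qO- --timeout=5 "http://$SERVER_IP"

# Clean up
kubectl delete pod np-server
```

If you need cross-namespace DNS, add an explicit egress rule allowing UDP/TCP port 53 to the kube-system namespace.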

Configure Pod Security Standards

Enable Pod Security Standards to enforce security policies at the namespace level.

kubectl label namespace default pod-security.kubernetes.io/enforce=restricted
kubectl label namespace default pod-security.kubernetes.io/audit=restricted
kubectl label namespace default pod-security.kubernetes.io/warn=restricted
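
With enforce=restricted in place, a pod that violates the restricted profile (for example, one requesting privileged mode) should now be rejected at admission. A server-side dry-run demonstrates this without creating anything:

```shell
# Should be rejected with a "violates PodSecurity" error
kubectl run priv-test --image=nginx:1.25 --privileged --dry-run=server
```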

Verify your setup

Check that all cluster components are running and nodes are ready.

kubectl get nodes -o wide
kubectl get pods -n kube-system
kubectl get pods -n calico-system
kubectl cluster-info

Test basic functionality by deploying a simple workload:

kubectl create deployment nginx-test --image=nginx:1.25
kubectl expose deployment nginx-test --port=80 --type=ClusterIP
kubectl get pods,svc
kubectl delete deployment nginx-test
kubectl delete service nginx-test

Common issues

Symptom | Cause | Fix
Nodes stuck in NotReady | CNI not installed or misconfigured | Check CNI pods: kubectl get pods -n calico-system
kubeadm init fails | Swap enabled or ports blocked | Disable swap and check firewall rules for ports 6443, 2379-2380
Pods stuck in Pending | Control plane has NoSchedule taint | Remove taint: kubectl taint nodes --all node-role.kubernetes.io/control-plane-
Network policies blocking traffic | Default deny-all policy active | Create specific allow policies for required communication
kubectl connection refused | Wrong kubeconfig or API server down | Check sudo systemctl status kubelet and verify kubeconfig


#kubernetes #kubeadm #k8s-cluster #container-orchestration #kubernetes-security
