In this post, I’ll build a Kubernetes cluster with one master and two worker nodes, using CRI-O as the container runtime and Cilium as the CNI. I have DNS in my environment, but you can use /etc/hosts entries if needed (see the example after the node specs below).
The OS will be Rocky Linux 9 running on ESXi 8.x, but this should work on any other hypervisor or on bare metal too. The config for the nodes is:
– Master: 2 CPUs, 4GB RAM
– Workers: 2 CPUs, 8GB RAM
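If you don’t have DNS, a minimal sketch of the /etc/hosts entries could look like this on every node; the IPs are placeholders, so adjust them to your own network:

# /etc/hosts — example entries; replace the IPs with your own
192.168.1.10   master.homelab.local   master
192.168.1.11   node1.homelab.local    node1
192.168.1.12   node2.homelab.local    node2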
First, we’ll have to disable swap and put SELinux in permissive mode.
Do this on all nodes (master + workers).
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
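To verify, swapon should print nothing and getenforce should report Permissive:

swapon --show   # no output means swap is off
getenforce      # should print "Permissive"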
Now, we have to open some firewall ports for Kubernetes.
Do this on the master node only.
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10259/tcp
sudo firewall-cmd --permanent --add-port=10257/tcp
sudo firewall-cmd --reload
Do this on the workers only.
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --reload
In addition, we’ll have to open the firewall ports for Cilium.
Do this on all nodes.
sudo firewall-cmd --permanent --add-port=4240/tcp
sudo firewall-cmd --permanent --add-port=8472/udp
sudo firewall-cmd --reload
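You can confirm the open ports on each node with:

sudo firewall-cmd --list-ports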
Now, we have to load some required kernel modules and apply a few sysctl settings so that iptables can see bridged traffic and IP forwarding is enabled.
Do this on all nodes.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup; params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
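A quick check that the modules are loaded and the sysctl values took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward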
Next, we install the CRI-O container runtime.
Do this on all nodes.
VERSION=1.22   # CRI-O minor version; CRI-O releases track Kubernetes minor versions, so ideally match the Kubernetes version you install below
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_8/devel:kubic:libcontainers:stable.repo
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:${VERSION}/CentOS_8/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo
sudo dnf -y install cri-o cri-tools
sudo systemctl enable --now crio
sudo systemctl status crio
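The cri-tools package ships crictl, which you can use to confirm the runtime responds over its socket:

sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version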
Then, we install the Kubernetes packages: kubelet, kubeadm, and kubectl.
Do this on all nodes.
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
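Before initializing the cluster, you can confirm the installed versions:

kubeadm version -o short
kubelet --version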
On the master node only, initialize the cluster. Replace master.homelab.local with your master’s IP address if you don’t use DNS.
sudo kubeadm init --control-plane-endpoint master.homelab.local:6443
You’ll get output that explains how to join the nodes: a kubeadm join command with a token and a discovery CA cert hash. Save it for the join step below.
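If you lose that output, you can generate a fresh join command on the master at any time (tokens expire after 24 hours by default):

sudo kubeadm token create --print-join-command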
On the master node, set up kubectl access for your user.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
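A quick sanity check that kubectl can reach the API server:

kubectl cluster-info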
Join the nodes to the cluster, using the token and CA cert hash from your own kubeadm init output.
Do this on workers only.
sudo kubeadm join master.homelab.local:6443 --token t2yir0.7n49rn6r57msy4j4 \
    --discovery-token-ca-cert-hash sha256:563360fcf60c49be91cdcf6486a4954c579a80c54503127ee0682ab8f86ec840
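On each worker, you can check that the kubelet came up after the join:

sudo systemctl status kubelet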
Check the nodes.
Do this on the master node.
kubectl get nodes
NAME                   STATUS   ROLES           AGE    VERSION
master.homelab.local   Ready    control-plane   116s   v1.28.2
node1.homelab.local    Ready    <none>          89s    v1.28.2
node2.homelab.local    Ready    <none>          85s    v1.28.2
…then check all the pods.
Do this on the master node.
kubectl get pods --all-namespaces
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-5dd5756b68-5ljgg                       1/1     Running   0          2m
kube-system   coredns-5dd5756b68-jp8tq                       1/1     Running   0          2m
kube-system   etcd-master.homelab.local                      1/1     Running   0          2m6s
kube-system   kube-apiserver-master.homelab.local            1/1     Running   0          2m6s
kube-system   kube-controller-manager-master.homelab.local   1/1     Running   0          2m6s
kube-system   kube-proxy-2z64r                               1/1     Running   0          2m
kube-system   kube-proxy-mrwjv                               1/1     Running   0          96s
kube-system   kube-proxy-nj5q4                               1/1     Running   0          100s
kube-system   kube-scheduler-master.homelab.local            1/1     Running   0          2m6s
Install Cilium on the master node only.
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
cilium install --version 1.14.2
Check the status.
cilium status --wait
You should see something like this.
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium             Desired: 3, Ready: 3/3, Available: 3/3
Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 3
                       cilium-operator    Running: 1
Cluster Pods:          2/2 managed by Cilium
Helm chart version:    1.14.2
Image versions         cilium             quay.io/cilium/cilium:v1.14.2@sha256:6263f3a3d5d63b267b538298dbeb5ae87da3efacf09a2c620446c873ba807d35: 3
                       cilium-operator    quay.io/cilium/operator-generic:v1.14.2@sha256:52f70250dea22e506959439a7c4ea31b10fe8375db62f5c27ab746e3a2af866d: 1
Then you can run the connectivity test. Some of the egress tests will fail because the cluster doesn’t have a public IP.
cilium connectivity test
You should be able to run your deployments now. If you need to access the pods over a URL, you can use something like a NodePort service or port-forwarding, but both are cumbersome and inefficient; a better option is a load balancer with MetalLB. Read this post to see how to do that. I kept it separate because it’s a different topic from the core Kubernetes, container runtime, and CNI setup.
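As a quick sketch of those two options (myapp is a hypothetical deployment name; adjust the names and ports to your workload):

# Expose a deployment on a NodePort (reachable on any node IP at a port from the 30000-32767 range opened earlier)
kubectl expose deployment myapp --type=NodePort --port=80
kubectl get svc myapp   # shows the assigned NodePort

# Or forward a local port to the deployment for ad-hoc access
kubectl port-forward deployment/myapp 8080:80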