Kubernetes (k8s) is getting a lot of attention and is becoming more and more popular, even in enterprises that are quite conservative about IT. In this post I’ll explain how to install Kubernetes with a single master and one worker node on CentOS. Then we’ll install a networking plugin (CNI), either Flannel or Calico. At the end I’ll show how to deploy a simple Node.js app, roll out an update, and undo the rollout.
NOTE ABOUT VERSIONS: This post was written against Kubernetes v1.14 (the version you’ll see in the outputs below). Repository URLs, manifest links and package versions may have changed for newer releases.
There are some pre-requisites for this post. You will need 2 servers, each with 2 CPUs and at least 2GB of RAM. A working DNS is also recommended. If you don’t have DNS in your lab, use /etc/hosts for hostname resolution; you can get away with using IPs only, but that is not recommended.
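If you go the /etc/hosts route, entries like these on both servers will do. The hostnames match the ones used later in this post, but the node IP 192.168.1.176 is just an example, so replace both IPs with your own.

cat <<EOF >> /etc/hosts
192.168.1.175   k8smaster.andreev.local   k8smaster
192.168.1.176   k8snode1.andreev.local    k8snode1
EOF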
Pre-requisites
On a freshly installed CentOS 7, run these pre-requisite commands on both the master and the node. You need to be logged in as root.
Make sure SELinux is disabled.
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Kubernetes doesn’t like swap, so if you have it in /etc/fstab, disable the swap.
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
Enable the bridge network module.
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
echo '1' > /proc/sys/net/bridge/bridge-nf-call-ip6tables
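Note that these settings don’t survive a reboot. If you want them to persist, something like this should work; the file names under /etc/modules-load.d and /etc/sysctl.d are my choice, any name will do.

echo 'br_netfilter' > /etc/modules-load.d/k8s.conf
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system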
Install Docker and change the cgroup driver from cgroupfs to systemd.
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install docker-ce
mkdir /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl enable docker && systemctl start docker
Run this command and make sure the output says systemd.
docker info | grep -i cgroup
Cgroup Driver: systemd
Add the Kubernetes repo.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Master Node
Open the firewall ports.
firewall-cmd --add-port=6443/tcp --permanent
firewall-cmd --add-port=2379-2380/tcp --permanent
firewall-cmd --add-port=10250-10252/tcp --permanent
firewall-cmd --reload
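If you want to double-check that the ports are actually open, you can list them.

firewall-cmd --list-ports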
From the repo, install kubelet, kubectl and kubeadm, and make sure the kubelet starts on boot.
yum -y install kubelet kubectl kubeadm
systemctl enable kubelet
Don’t start the kubelet yet. It will fail with a message that it can’t find a config YAML file. Just initialize the cluster; this will also start the kubelet service. Pick one option (Flannel or Calico).
NOTE: This line initializes the cluster to be used for Flannel.
kubeadm init --pod-network-cidr=10.244.0.0/16
NOTE: This line initializes the cluster to be used for Calico.
kubeadm init --pod-network-cidr=192.168.0.0/16
Look at the bottom of the output. You should see something like this. The three commands under “To start using your cluster” and the kubeadm join command at the end are the important parts.
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.175:6443 --token 0z3iov.28rz29hxw9ft4jmg \
    --discovery-token-ca-cert-hash sha256:782d37b5c870ebebd2f17cc3ba1424f305e6a3e293afc04fc2030edfab6bf4b0
This means that the cluster initialized OK.
Check the status of both Docker and Kubernetes.
systemctl status docker | grep Active
systemctl status kubelet | grep Active
Make sure they are both running. Check /var/log/messages if you have any issues.
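The kubelet also logs to the journal, so if the service keeps failing, something like this is usually the quickest way to see why.

journalctl -u kubelet -xe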
Nodes (workers)
On the worker nodes, do the same pre-requisites as on the master (SELinux, swap, bridge module, Docker, Kubernetes repo), except that you don’t have to install kubectl.
yum -y install kubelet kubeadm
systemctl enable kubelet
Open the firewall ports.
firewall-cmd --add-port=10250/tcp --permanent
firewall-cmd --add-port=30000-32767/tcp --permanent
firewall-cmd --reload
Now you can join the cluster. Use the kubeadm join command from the output of kubeadm init on the master (see above).
kubeadm join 192.168.1.175:6443 --token 0z3iov.28rz29hxw9ft4jmg \
    --discovery-token-ca-cert-hash sha256:782d37b5c870ebebd2f17cc3ba1424f305e6a3e293afc04fc2030edfab6bf4b0
That’s how you join nodes to the master. Replace 192.168.1.175 with the IP or hostname of your master node. If everything is OK, you’ll see something like this.
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
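If you’ve lost the join command, or the token has expired (by default tokens are only valid for 24 hours), you can generate a new one on the master; this prints the full join command for you.

kubeadm token create --print-join-command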
Kubernetes user
While still logged in as root on the master, create the Kubernetes user that you will use to manage the k8s cluster. In my case, I’ll create a user called k8s with secret as the password.
useradd k8s -g docker
usermod -aG wheel k8s
echo -e "secret\nsecret" | passwd k8s
Log in as this user (k8s) and execute these commands. They are the same commands shown in the output of kubeadm init above.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If you want to add another user to manage the Kubernetes cluster, make sure you execute these three commands for that user as well. Check if everything looks good.
docker ps
You should see a bunch of Kubernetes system containers running (etcd, scheduler, API server).
Then check the nodes.
kubectl get nodes
NAME                      STATUS     ROLES    AGE   VERSION
k8smaster.andreev.local   NotReady   master   12m   v1.14.0
k8snode1.andreev.local    NotReady   <none>   10m   v1.14.0
The master and the node show NotReady because the cluster doesn’t have a pod network yet.
Network CNI
Depending on how you’ve initialized the cluster, pick one of the network plugins (Flannel or Calico).
Flannel
For the network to work, we’ll have to use one of the CNI plugins. There are many: Flannel, Weave Net, Calico, etc.
Let’s install Flannel. Do this on the master only, logged in as the k8s user. The master will take care of the nodes.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Start this little infinite loop and you’ll see that after 20-30 seconds, both the master and the node will change their status to Ready. Hit Ctrl-C to end.
while true; do
  kubectl get nodes
  sleep 3
done
Now, you have a fully working cluster ready.
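You can also confirm that the Flannel pods came up; with this version of the manifest they run in the kube-system namespace.

kubectl get pods -n kube-system | grep flannel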
Calico
Let’s install Calico. As with Flannel, do this on the master only, logged in as the k8s user. The master will take care of the nodes.
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Start this little infinite loop and you’ll see that after 20-30 seconds, both the master and the node will change their status to Ready. Hit Ctrl-C to end.
while true; do
  kubectl get nodes
  sleep 3
done
Now, you have a fully working cluster ready.
Deployment
In this example, I’ll create a small container that runs a Node.js app; when you hit it, it responds with “Hello from ” followed by the hostname of the container. On top of that, we’ll create a load balancer so we can see how that works.
First, let’s create the container based on Node.js image. Create a file named Dockerfile with this content.
FROM node:latest
LABEL maintainer "[email protected]"
ADD appv1.js /app.js
ENTRYPOINT ["node", "app.js"]
This is our application. Save it as appv1.js.
const http = require('http');
const os = require('os');
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.end('Hello from ' + os.hostname() + '\n');
});

server.listen(port);
Create the container. You’ll need a valid Docker Hub login. In my case, my username is klimenta. Replace it with yours.
docker build -t klimenta/appv1:latest .
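Before pushing, you can optionally test the image locally. The container name appv1-test is just something I picked for the test; the app listens on port 3000.

docker run -d --name appv1-test -p 3000:3000 klimenta/appv1:latest
curl http://localhost:3000
docker rm -f appv1-test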
It’s time to log in to Docker Hub and upload the image there. You’ll be prompted for a username and password.
docker login
Upload the image.
docker push klimenta/appv1:latest
Create a Kubernetes deployment file named deployment.yaml.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: appv1
spec:
  replicas: 3
  template:
    metadata:
      name: appv1
      labels:
        app: appv1
    spec:
      containers:
      - image: klimenta/appv1:latest
        name: nodejs
---
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: appv1
  ports:
  - port: 80
    targetPort: 3000
We are creating a deployment with 3 replicas and a load balancer that listens on port 80 and sends the traffic to port 3000 on the pods with our application.
Create the deployment and the load balanced service.
kubectl create -f deployment.yaml
After about 30 seconds, you’ll see that your pods are ready.
kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
appv1-596dd64666-4k7qn   1/1     Running   0          93m
appv1-596dd64666-gn5gr   1/1     Running   0          93m
appv1-596dd64666-vv9h5   1/1     Running   0          93m
The load balancer is also ready.
kubectl get svc
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes     ClusterIP      10.96.0.1       <none>        443/TCP        102m
loadbalancer   LoadBalancer   10.111.212.41   <pending>     80:30310/TCP   94m
If you hit the load balancer, you’ll see a response. Replace the IP with yours.
curl http://10.111.212.41
Hello from appv1-596dd64666-gn5gr
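The cluster IP above is only reachable from within the cluster, and in a bare-metal lab the EXTERNAL-IP stays <pending> because there is no cloud provider to hand out an external address. You can still reach the app from outside through the service’s NodePort (30310 in my output, yours will differ) on any worker node, for example using the sample node IP from the pre-requisites.

curl http://192.168.1.176:30310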
Now, let’s say that we created a new version of our application. Copy appv1.js as appv2.js and change appv2.js a little bit.
cp appv1.js appv2.js
The appv2.js should look like this.
const http = require('http');
const os = require('os');
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.end('Greetings from ' + os.hostname() + '\n');
});

server.listen(port);
Change the Dockerfile to look like this.
FROM node:latest
LABEL maintainer "[email protected]"
ADD appv2.js /app.js
ENTRYPOINT ["node", "app.js"]
Build the new image and upload it to Docker Hub.
docker build -t klimenta/appv2:latest .
docker push klimenta/appv2:latest
Deploy the new application.
kubectl set image deployment appv1 nodejs=klimenta/appv2:latest
If you check the app now, you’ll see that it reflects the new version.
curl http://10.111.212.41
Greetings from appv1-5d949774f5-b794k
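kubectl set image triggers a rolling update. If you want to watch its progress or confirm that it finished, you can check the rollout status.

kubectl rollout status deployment appv1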
But what if there is a bug in our application and we want to revert it back to the initial one? Easy.
kubectl rollout undo deployment appv1
deployment.extensions/appv1 rolled back

curl http://10.111.212.41
Hello from appv1-596dd64666-94gqh
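If you’ve been through more than one version, you can also list the deployment’s revisions and roll back to a specific one; revision 1 here is just an example.

kubectl rollout history deployment appv1
kubectl rollout undo deployment appv1 --to-revision=1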