In this post I’ll explain how to install MetalLB, a load balancer for bare-metal/VM Kubernetes clusters. If you have your own home lab to play with Kubernetes and you are not using AKS, EKS, or GKE, then this post is for you.
The prerequisite is a running Kubernetes cluster. In my earlier post, I’ve described how to install Kubernetes with CRI-O and Cilium as the CNI, so you can follow that post or feel free to use your own install. Mind that if you use another CNI such as Calico or Weave, there are a few things to check first to see if they apply to you. Look at this link.
We’ll install MetalLB now. You probably have your k8s master and the nodes ready.
Do this first on the master.
kubectl edit configmap -n kube-system kube-proxy
…and, since kube-proxy is running in IPVS mode here, change strictARP to true.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
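If you’d rather not edit the ConfigMap by hand, the MetalLB docs suggest a sed pipeline along these lines; it leaves everything else in the ConfigMap alone and only flips strictARP:
# apply the strictARP change non-interactively
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system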
Deploy MetalLB.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.11/config/manifests/metallb-native.yaml
You’ll see the pods running in the metallb-system namespace.
kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-7d56b4f464-gkfrq   1/1     Running   0          55s
speaker-l5954                 1/1     Running   0          55s
speaker-xrswp                 1/1     Running   0          55s
speaker-z8vdt                 1/1     Running   0          55s
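Optionally, before applying any configuration, you can wait for the MetalLB pods to become ready. Assuming the pods from the official manifest carry the app=metallb label, something like this should do it:
# wait until the controller and speaker pods report Ready
kubectl wait --namespace metallb-system \
  --for=condition=ready pod \
  --selector=app=metallb \
  --timeout=90s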
Let’s configure our load balancer to hand out IPs from the 192.168.1.50-192.168.1.60 range. Change line 8 to match your network.
Save it as a lb-config.yaml file.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.50-192.168.1.60
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
Deploy the config.
kubectl apply -f lb-config.yaml
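To double-check that MetalLB picked up the pool and the L2 advertisement, you can list the custom resources you just created:
kubectl get ipaddresspools.metallb.io,l2advertisements.metallb.io -n metallb-system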
Now, we can deploy a simple app and a load balancer. Save the file as demo.yaml.
This is a super simple app that I made in Node.js; it listens on port 3000 and displays the IP of the pod that the load balancer hits.
You can use it with any type of load balancer, not just MetalLB.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 6
  selector:
    matchLabels:
      run: demo
  template:
    metadata:
      labels:
        run: demo
    spec:
      containers:
      - name: demo
        image: klimenta/serverip
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  type: LoadBalancer
  selector:
    run: demo
Deploy the app.
kubectl apply -f demo.yaml
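To cross-check the IPs the app will report, you can list the demo pods together with their pod IPs:
kubectl get pods -l run=demo -o wide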
Check the load balancer service.
kubectl get svc
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
kubernetes     ClusterIP      10.96.0.1      <none>         443/TCP        33m
loadbalancer   LoadBalancer   10.106.36.58   192.168.1.50   80:30770/TCP   4m33s
…and if you go to that IP (192.168.1.50), you’ll see the IP of the pod that the load balancer sent your request to.
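If you prefer the terminal over a browser, a quick curl loop (using the external IP from above) shows the responses coming back from different pods:
# hit the load balancer ten times and print each pod IP on its own line
for i in $(seq 1 10); do curl -s http://192.168.1.50; echo; done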
Install nginx ingress reverse proxy
The problem with the scenario above is that for every service you need to expose, you need a different IP from the IP pool that we’ve assigned to MetalLB (.50-.60).
A better solution is to install an nginx ingress that will act as a reverse proxy, but for this you’ll need a working DNS server, as the services will share the same IP but use different FQDNs.
So, in practice, all the HTTP services sit behind one external IP and nginx routes each request to the right service based on the hostname.
Install nginx ingress first.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
kubectl get pods -n ingress-nginx
kubectl get service ingress-nginx-controller -n=ingress-nginx
Don’t worry if two of the nginx pods are not Running and show “Completed” – those are one-time admission jobs that have already finished.
You’ll see the external IP that’s assigned to nginx from MetalLB.
In many bare-metal environments, creating Ingress resources fails because the nginx admission webhook can’t be reached; deleting the ValidatingWebhookConfiguration is the usual workaround.
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
Now, let’s create an nginx web server. This has nothing to do with the nginx ingress; it’s a completely separate deployment.
I am exposing the nginx deployment and creating an ingress rule that routes requests for the FQDN nginx.homelab.local to the deployment on port 80.
NOTE: If your browser can’t resolve this URL, it won’t work. So, make sure you have a DNS or /etc/hosts file or some type of name resolution.
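For a quick lab test without a real DNS server, an /etc/hosts entry like the one below is enough. The IP here is just an example; use whatever external IP MetalLB assigned to the ingress-nginx-controller service.
# example /etc/hosts entry – replace 192.168.1.51 with your ingress external IP
192.168.1.51   nginx.homelab.local httpd.homelab.local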
kubectl create deployment nginx --image=nginx --port=80
kubectl expose deployment nginx
kubectl create ingress nginx --class=nginx --rule nginx.homelab.local/=nginx:80
Then, let’s install an Apache server that runs behind the same IP and the same port 80.
kubectl create deployment httpd --image=httpd --port=80
kubectl expose deployment httpd
kubectl create ingress httpd --class=nginx --rule httpd.homelab.local/=httpd:80
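If you prefer declarative manifests over kubectl create ingress, the nginx rule translates to roughly the following Ingress resource (the httpd one follows the same pattern). This is a sketch; the exact pathType that kubectl generates may differ.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.homelab.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80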
If everything is OK, you’ll be able to access both web servers using their URLs.
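You can also test from the command line before name resolution is in place by setting the Host header explicitly; replace the example IP below with the external IP of the ingress-nginx-controller service.
# nginx welcome page
curl -H "Host: nginx.homelab.local" http://192.168.1.51/
# Apache "It works!" page
curl -H "Host: httpd.homelab.local" http://192.168.1.51/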
If you want to put Grafana and/or KubeCost behind nginx ingress, read these short posts here and here.