In this blog post, I’ll describe how to install Docker and Docker Swarm, play with some common commands, and create a simple PHP site running on Apache, with nginx acting as a load balancer. I’ll use two servers: one manager and one worker. The shared storage for the Apache web servers will live on a third server running as an NFS server. For the first part, I’ll work on the manager server only; when we create our first service in the Swarm, we’ll use the worker server as well. The installation for a manager and a worker is the same.
Install
Each OS has a different way of installing Docker, so it’s better to check the official documentation before doing the install. In my case I’ll use CentOS 7. I am logged in as the root user, but you can do this as a regular user by prefixing the commands with sudo. I’ll also create a user called docker and add it to the docker group, which Docker creates when it’s installed.
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install docker-ce
systemctl enable docker
systemctl start docker
useradd docker -g docker
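Next, give the new docker user a password and switch to it. Something like this should do (a quick sketch; passwd and su are standard Linux commands, nothing Docker-specific):

# set a password for the docker user, then log in as that user
passwd docker
su - docker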
Logged in as the docker user, let’s see if everything works OK by listing the containers. You shouldn’t get any yet, but you’ll see something like this.
docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
Docker runs containers based on images, so let’s pull an image.
docker pull ubuntu:latest
You can list the images using:
docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
ubuntu       latest   93fd78260bd1   2 weeks ago   86.2MB
Let’s run a container based on that image. We’ll run the container in an interactive mode (-it).
docker container run -it ubuntu:latest /bin/bash
root@250a109cc857:/# hostname
250a109cc857
As you can see, when we typed hostname, we got the hostname of the container, not of the underlying host. You can exit the interactive shell without stopping the container by typing CTRL-P then CTRL-Q.
We can list the running containers, using:
docker ps
CONTAINER ID   IMAGE           COMMAND       CREATED              STATUS              PORTS   NAMES
250a109cc857   ubuntu:latest   "/bin/bash"   About a minute ago   Up About a minute           nifty_mahavira
As you can see, the running container has a random name, in my case nifty_mahavira. When you interact with the containers, you can use their names or their IDs. If you want, you can name your container too.
docker container run --name myfirstcontainer -it ubuntu:latest /bin/bash
This time, type exit to get out of the container. Once out, type docker ps -a. This command shows both running and stopped containers (-a = all).
docker ps -a
CONTAINER ID   IMAGE           COMMAND       CREATED         STATUS                     PORTS   NAMES
7947da7460e5   ubuntu:latest   "/bin/bash"   7 minutes ago   Exited (0) 7 minutes ago           myfirstcontainer
250a109cc857   ubuntu:latest   "/bin/bash"   2 hours ago     Up 2 hours                         nifty_mahavira
As you can see, there are two containers. One of them is running, the other one is stopped. When we type exit, the container stops. If we type CTRL-P, CTRL-Q, the container keeps running in the background. Some containers (e.g. nginx, Apache) can be started with the -d switch, which runs them in detached mode, meaning they run in the background.
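As a quick illustration of the -d switch, here is a small sketch (the container name webtest is made up for this example):

# start an nginx container detached; it keeps running in the background
docker container run -d --name webtest nginx:latest

# it shows up under docker ps; stop and remove it when you are done playing
docker stop webtest
docker rm webtest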
Let’s connect to the running ubuntu container (nifty_mahavira) again. As you can see, we are back at the initial prompt.
docker attach nifty_mahavira
Do CTRL-P, CTRL-Q again.
We can see the logs for the container.
docker logs nifty_mahavira
root@250a109cc857:/# hostname
250a109cc857
If we make a change inside the container and then remove the container, our changes will be lost. So, we can create an image of a running container with our changes.
Attach to the first container again and type touch myfile.txt. This command will create an empty file in the container file system.
docker attach nifty_mahavira
root@250a109cc857:/# touch myfile.txt
Exit with CTRL-P, CTRL-Q and create the image.
docker commit -m "Created an empty file" -a "[email protected]" nifty_mahavira klimenta/myfirstimage
sha256:0e00e0694e596a5c85aa57d9a89797ac6872626207b315df62c0a0142704e57d
If you list the images now, you’ll see our first image there.
docker images
REPOSITORY              TAG      IMAGE ID       CREATED          SIZE
klimenta/myfirstimage   latest   0e00e0694e59   53 seconds ago   86.2MB
ubuntu                  latest   93fd78260bd1   2 weeks ago      86.2MB
If we create a container from our image, we can see that the file is there.
docker container run -it klimenta/myfirstimage /bin/bash
root@bd33bb413ada:/# ls -l myfile.txt
-rw-r--r-- 1 root root 0 Dec 5 19:36 myfile.txt
You can exit the container now, either with exit or with CTRL-P, CTRL-Q. It doesn’t matter which.
You can get more info about the image, if you type:
docker inspect klimenta/myfirstimage
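The inspect output is a long JSON document. If you only want a specific field, you can filter it with --format and a Go template (a small sketch; the fields below are just examples of what the image metadata contains):

# print only the image ID and the creation timestamp
docker inspect --format '{{.Id}}' klimenta/myfirstimage
docker inspect --format '{{.Created}}' klimenta/myfirstimage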
Let’s say that you don’t need the image anymore. You can delete an image with:
docker rmi klimenta/myfirstimage
Error response from daemon: conflict: unable to remove repository reference "klimenta/myfirstimage" (must force) - container 2223cba9c2da is using its referenced image 0e00e0694e59
This error means that we have a container based on that image and the container’s ID is 2223cba9c2da.
You can find that container by typing:
docker ps -a | grep 2223cba9c2da
2223cba9c2da   klimenta/myfirstimage   "/bin/bash"   4 minutes ago   Exited (2) 4 minutes ago           condescending_goldstine
First, remove the container.
docker rm 2223cba9c2da
2223cba9c2da
Then, delete the image.
docker rmi klimenta/myfirstimage
Untagged: klimenta/myfirstimage:latest
Deleted: sha256:0e00e0694e596a5c85aa57d9a89797ac6872626207b315df62c0a0142704e57d
Deleted: sha256:a216cafc1ec085642f38b89c4bf6cf63fec65bc9097fe141f69d5f81797f9892
If you want to remove all containers, do:
docker rm `docker ps -a -q`
Make sure that all containers are stopped first, because docker rm only removes stopped containers. If you want to remove all containers regardless of their running state, add the --force flag to the command above.
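For completeness, that would look something like this (use it carefully, since --force also kills running containers):

# remove every container, running or stopped
docker rm --force `docker ps -a -q`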
We mentioned how to create our own image by using the commit command. Another way of doing it is by using a Dockerfile. There is a lot to read about Dockerfiles. Here I’ll present a small Dockerfile to build an image based on ubuntu. So, create a file called Dockerfile and add these lines.
#This is a custom image
FROM ubuntu:latest
LABEL maintainer "[email protected]"
RUN apt-get update
RUN apt-get install -y openssh-server
Once this file is saved, create the image. Don’t forget the dot at the end.
docker build -t klimenta/mysecondimage:v1 .
It will take some time, but you’ll have the image ready. You can verify with docker images.
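If you want to verify more than the image listing, you can start a throwaway container from the new image and confirm the package made it in (a small sketch; --rm just cleans up the container afterwards):

# run a one-off container from the image and check that openssh-server is installed
docker run --rm -it klimenta/mysecondimage:v1 dpkg -l openssh-server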
Docker Swarm
Docker Swarm is a cluster manager and an orchestrator. Swarm mode is built into the Docker engine, so there is nothing extra to install; we’ll enable it on the manager node and then join the worker node. First, we have to initialize the swarm.
docker swarm init
Swarm initialized: current node (qgtrimmoxuhdk639kgqiuevsy) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5qna1cjc7p8xeom7u1xdzr5kjzkdgrt87wilsd8c93b4py75vy-03ll63wt9c8nk0vvs7ckgrerv 192.168.48.128:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
If we want to see the nodes in the cluster, we can do:
docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
qgtrimmoxuhdk639kgqiuevsy *   manager    Ready    Active         Leader           18.09.0
To join a node as a worker, we’ll have to execute the command that was given above. In case you cleared the screen, you can always get the worker token back by doing:
docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5qna1cjc7p8xeom7u1xdzr5kjzkdgrt87wilsd8c93b4py75vy-03ll63wt9c8nk0vvs7ckgrerv 192.168.48.128:2377
So, log in on the worker node and execute the command from above. You should get a confirmation that “This node joined a swarm as a worker.” If you try to see the nodes from the worker, you’ll get an error saying “Error response from daemon: This node is not a swarm manager. Worker nodes can’t be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.” So, you’ll have to do docker node ls from the manager, or on the worker you can type:
docker info | grep Swarm
Swarm: active

docker info | grep Manager
Is Manager: false
Manager Addresses:
and at least see that the swarm is active and that this node is not a manager.
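If you prefer something cleaner than grep, docker info also accepts a Go template via --format (a small sketch; I’m assuming these Swarm fields are present in this Docker version):

# prints "active" on a swarm member, and true/false for whether this node is a manager
docker info --format '{{.Swarm.LocalNodeState}}'
docker info --format '{{.Swarm.ControlAvailable}}'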
If you want to remove the node from the swarm, you can use the command:
docker node rm <nodeID>
If you get an error that the worker is still up and running, you can add the --force flag. Do this only when the worker node is not available, let’s say there is a network issue preventing you from logging in to it, or the node crashed and it’s not coming back.
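In that case the command would look like this (a sketch; the node ID is the one shown by docker node ls):

# forcibly remove an unreachable node from the swarm
docker node rm --force <nodeID>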
Preferably, you would like to leave the swarm first. So, on the worker node, do:
docker swarm leave
Node left the swarm.
And then remove the node from the manager with docker node rm
docker node rm 5g96otbl7lbbffc2l29s5g6lr
5g96otbl7lbbffc2l29s5g6lr
Let’s go with our service example now, but before doing that, add the worker node back to the swarm. We’ll create a new service that runs on all nodes (including the manager, which is not recommended). But first things first: let’s create a shared volume for the Apache servers. Follow this link to create the server. On the manager and the worker, install the NFS client package so Docker can mount an NFS share.
yum -y install nfs-utils
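For reference, the export on the NFS server side might look roughly like this (a sketch with assumed values; I’m guessing a /24 network from the IPs used in this post, so adjust it to your setup from the link above):

# /etc/exports on the NFS server (192.168.48.131): allow the swarm nodes to mount /nfs
/nfs 192.168.48.0/24(rw,sync,no_root_squash)

After editing /etc/exports, exportfs -ra re-exports the shares.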
On the NFS server, create a file in the directory you exported as the NFS share. In my case that directory is /nfs.
echo '<?php echo gethostname(), "\n";?>' > /nfs/index.php
Define the variables that we’ll pass to the Docker command when creating the service. Replace them with your own values and the IP of your NFS server.
export NFS_VOL_NAME=nfs
export NFS_LOCAL_MNT=/var/www/html
export NFS_SERVER=192.168.48.131
export NFS_SHARE=/nfs
export NFS_OPTS=vers=4,soft
Now, we can create the service from an image from Docker Hub that comes with Apache and PHP.
docker service create \
  -p 8080:80 \
  --mount "src=$NFS_VOL_NAME,dst=$NFS_LOCAL_MNT,volume-opt=device=:$NFS_SHARE,\"volume-opt=o=addr=$NFS_SERVER,$NFS_OPTS\",type=volume,volume-driver=local,volume-opt=type=nfs" \
  -d --name webserver --replicas 3 \
  php:7.2-apache
So, I named my service webserver and I have 3 replicas. I am exposing port 80 on the containers as port 8080 on the Docker hosts, because nginx will be listening on port 80 and acting as a load balancer.
You can list the services.
docker service ls
ID             NAME        MODE         REPLICAS   IMAGE            PORTS
id7362udkbj0   webserver   replicated   3/3        php:7.2-apache   *:8080->80/tcp
You can see the details for this particular service.
docker service ps webserver --no-trunc
ID                          NAME          IMAGE                                                                                    NODE      DESIRED STATE   CURRENT STATE                ERROR   PORTS
xjhzzm8e9ai3h33qdq5rnhsvm   webserver.1   php:7.2-apache@sha256:e3488a95726dd01a29056348b18e3461bc89ea7512742cb2c05ca9cb5f445c24   worker    Running         Running about a minute ago
09apro4je3xnsn1zlmfvw83xg   webserver.2   php:7.2-apache@sha256:e3488a95726dd01a29056348b18e3461bc89ea7512742cb2c05ca9cb5f445c24   manager   Running         Running about a minute ago
43uzdyo23vrl5p1057wrcibpw   webserver.3   php:7.2-apache@sha256:e3488a95726dd01a29056348b18e3461bc89ea7512742cb2c05ca9cb5f445c24   manager   Running         Running about a minute ago
You can check the individual containers on each node.
docker ps -a
You can also open a shell inside a single container with docker exec. Replace the ID with your own.
docker ps -a
CONTAINER ID   IMAGE            COMMAND                  CREATED         STATUS         PORTS    NAMES
1833bab6a41e   php:7.2-apache   "docker-php-entrypoi…"   3 minutes ago   Up 3 minutes   80/tcp   webserver.3.43uzdyo23vrl5p1057wrcibpw
7d4a8fd7cad7   php:7.2-apache   "docker-php-entrypoi…"   3 minutes ago   Up 3 minutes   80/tcp   webserver.2.09apro4je3xnsn1zlmfvw83xg

docker exec -it 1833bab6a41e /bin/bash
root@1833bab6a41e:/var/www/html# cat index.php
<?php echo gethostname(), "\n";?>
root@1833bab6a41e:/var/www/html# exit
exit
If you open up a browser on a machine outside of the swarm and go to http://192.168.48.128:8080 (or the worker’s IP), you can check our first web server. It will display the hostname of the Apache container that serves the content. Docker Swarm also acts as a load balancer, but it’s only a TCP (layer 4) load balancer, not an HTTP one.
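If the machine outside the swarm only has a shell, curl shows the same thing. Note that the routing mesh publishes port 8080 on every node, so either IP works (a small sketch reusing the IPs from this post):

# each request is answered by one of the three Apache containers
curl http://192.168.48.128:8080
curl http://192.168.48.130:8080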
If you go to the NFS server and create a small script called loop.sh, you can see how the swarm distributes the requests.
cat loop.sh
#!/bin/bash
while :
do
    curl http://192.168.48.128:8080
    sleep 1
done

./loop.sh
7d4a8fd7cad7
4b42e663f797
1833bab6a41e
7d4a8fd7cad7
4b42e663f797
^C
Now we need an nginx service to balance the load. In order to do that, we’ll create our own image with a specific configuration file.
Create a new Dockerfile and add the following:
# A custom config nginx load balancer
FROM nginx:latest
LABEL maintainer "[email protected]"
COPY nginx.conf /etc/nginx/nginx.conf
In the same directory where you created the Dockerfile, create a file called nginx.conf and add the following lines.
events {
    worker_connections 1024;
}
http {
    upstream localhost {
        least_conn;
        server 192.168.48.128:8080;
        server 192.168.48.130:8080;
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://localhost;
        }
    }
}
Where 192.168.48.128 and 192.168.48.130 are the manager’s and the worker’s IPs. Now, you can create the custom image.
docker build -t lbtemplate .
And provision a service from that image template.
docker service create --name loadbalancer -d -p 80:80 lbtemplate:latest
If you check the services with docker service ls and see that all replicas are up, you can access the load balancer at http://192.168.48.128 and you’ll get a response from one of the web servers.
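To confirm that nginx is actually spreading the requests, you can run a short loop against port 80, just like we did against port 8080 earlier (a small sketch reusing the manager’s IP; the container IDs in the output will of course be your own):

# hit the nginx load balancer a few times; the responding Apache container should vary
for i in 1 2 3 4 5
do
    curl http://192.168.48.128
    sleep 1
done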