Why do you need Docker Swarm?
When you develop, manage, scale, and deploy a small project with plain Docker commands, a single host (VPS) is enough, so there is no problem. However, once the project needs to grow onto more hosts (VPS), it becomes very hard to manage and scale, and deploying to each host by hand is painful. Understanding this pain, Docker developed what is called Docker Swarm.
What is Docker Swarm?
Docker Swarm is Docker's native clustering tool. It lets you group several Docker hosts into a cluster and treat them as a single virtual Docker host. A swarm is a cluster of one or more running Docker Engines, and Swarm mode provides features for cluster management and orchestration.
Features of Docker Swarm
- Cluster management integrated with Docker Engine: manage the cluster with the Docker Engine itself, using the Docker CLI to create the swarm; no additional orchestration software is required.
- Decentralized design: instead of handling the differences between node roles at deployment time, the Docker Engine handles any specialization at runtime. You can deploy both kinds of nodes, managers and workers, using the Docker Engine.
- Declarative service model: the Docker Engine uses a declarative approach that lets you define the desired state of the various services in your application stack. For example, you can describe an application consisting of a web front-end, a message queue service, and a database back-end.
- Scaling: for each service you can specify the number of tasks you want to run. When you scale up or down, the swarm manager automatically adds or removes tasks to maintain the desired state.
- Desired state reconciliation: imagine you set up a service to run 10 replicas of a container, and a worker machine (host/VPS) holding 2 of those 10 replicas crashes. The swarm manager will then create 2 new replicas to replace the crashed ones and schedule them onto workers that are still running.
- Multi-host networking: you can specify an overlay network for your services. The swarm manager automatically assigns addresses to containers on the overlay network when it initializes or updates the application.
- Service discovery: the swarm manager node assigns each service in the swarm a unique DNS name, and you can reach a service's containers by querying that name.
- Load balancing: you can expose the ports of services to an external load balancer, and internally the swarm distributes requests among a service's containers.
- Secure by default: services communicate with each other over TLS. You can use self-signed root certificates or certificates from a custom root CA.
- Rolling updates: the swarm applies service image updates incrementally. The swarm manager lets you control the delay between deploying the update to different sets of nodes, and you can roll back at any time.
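As a sketch of how scaling, rolling updates, and rollbacks are driven from the CLI (the service name `web` and the nginx image here are hypothetical, not part of this article's demo):

```shell
# Create a service with a controlled update policy:
# update 2 tasks at a time, waiting 10s between batches.
docker service create --name web --replicas 6 \
  --update-parallelism 2 --update-delay 10s nginx:1.24

# Roll out a new image version; Swarm replaces tasks batch by batch.
docker service update --image nginx:1.25 web

# If something goes wrong, revert to the previous service spec.
docker service rollback web
```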
A swarm includes managers and workers. Users can declare the desired state of multiple services to run in the swarm using YAML files. Some core concepts:
- Swarm: a cluster of one or more running Docker Engines (the nodes) in Swarm mode. Instead of running containers by command, we define services that allocate replicas to nodes.
- Node: a physical or virtual machine running the Docker Engine in Swarm mode. There are two types of nodes: Manager Nodes and Worker Nodes.
- Manager Node: the node that receives service definitions from the user; it manages and dispatches tasks to the Worker nodes. By default, a Manager node is also considered a Worker node.
- Worker Node: a node that receives and executes tasks from a Manager node.
- Service: defines the container image and the number of replicas (copies) you want to launch in the swarm.
- Task: a unit of work that a worker node must perform, allocated by the Manager node. A task carries a Docker container and the commands to run inside that container.
Working with Docker Swarm
In this section we will practice with Docker Swarm through a small demo. First we need 4 virtual machines (virtual VPS); to create them we use the following command:
$ docker-machine create <machine-name>
- <machine-name>: the name you want to give the virtual machine.
Create a machine (virtual machine) for the swarm manager:
$ docker-machine create manager
Next, create the machines for the swarm workers: worker1, worker2, and worker3.
$ docker-machine create worker1
$ docker-machine create worker2
$ docker-machine create worker3
After creating them, we check the list of machines:
$ docker-machine ls
Now we use the inspect command to see the information of a machine:
$ docker-machine inspect manager
From the output it is easy to see basic information about the machine, such as its IP address, MachineName (the name we set), the SSHKey used to access the machine, CPU (1 CPU), Memory (1GB), and so on.
The setup of the machines is now complete, so we proceed to initialize the swarm on the manager. To access the manager or a worker, we SSH in as follows:
$ docker-machine ssh <name-machine>
- <name-machine> = manager
(To return to the local host from the SSH session, simply exit the shell.) On the manager, initialize the swarm:
$ docker swarm init --advertise-addr <IP Machine>
If you are using Docker Desktop for Mac or Docker Desktop for Windows, then plain docker swarm init is enough. But here the operating system is Boot2Docker, so the --advertise-addr flag is required.
Check the list of nodes currently in the swarm:
$ docker node ls
Only a manager node (machine/VPS) can view this list, and the * indicates the node you are currently on. Here we have only one manager node, and it is in Ready status. That's it for the work on the manager.
Now let's move on to worker1. On worker1, we join it to the swarm as a worker:
$ docker swarm join --token <token> <host>:<port>
- host: the manager's IP address.
- port: the port the manager listens on (2377 by default).
To get the join token, we run the following command on that swarm's manager:
$ docker swarm join-token <worker|manager>
On worker2 and worker3 we do the same.
Note: a worker node can only join one swarm.
On the manager node, we check the list of nodes again. It is easy to see that the 3 worker nodes have an empty MANAGER STATUS column; this tells us that they are worker nodes.
So we have successfully created 3 workers and 1 manager and gathered them into a swarm (cluster).
A question arises here: why not reuse the swarm we created in Part 3 on the local host (Docker Desktop for Mac), treat it as the manager node, and join the other nodes to it, instead of spending resources on a separate machine for the manager? The answer, as explained clearly in Part 3, is that Docker Desktop for Mac cannot open routing traffic to the machines, so trying to join nodes (machines/VPS) to a swarm whose manager is the local host does not work. This is also a weakness of networking on OSX.
Now we continue by creating services and replicas and deploying them on the manager node.
To do this we need to configure the docker-compose.yml file:
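The contents of the compose file are not reproduced here, so purely as an illustration, a minimal Swarm-ready docker-compose.yml for this kind of demo might look like the following. The service names, image names, ports, and replica counts are assumptions, not the article's actual configuration:

```yaml
version: "3"
services:
  servergo:                        # hypothetical API service from Part 2
    image: <username>/servergo:v1  # the image pushed to Docker Hub below
    ports:
      - "8080:8080"
    deploy:
      replicas: 4                  # Swarm spreads these tasks across nodes
  webapp:                          # hypothetical front-end service
    image: <username>/webapp:v1
    ports:
      - "3000:3000"
    deploy:
      replicas: 2
```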
and copy the configured docker-compose.yml file to the manager:
$ docker-machine scp <file-source> <machine-name>:<destination-path>
In this demo:
$ docker-machine scp ~/Workspace/gocode/docker-swarm-demo/docker-compose.yml manager:/home/docker/docker-compose.yml
Next we need to push the 2 images we used in Part 2 to a repository on Docker Hub:
$ docker tag <image> <username>/<repository-name>:<tag-name>
$ docker push <username>/<repository-name>
- <image>: the ID of the image you want to push.
- <username>: your username on Docker Hub.
- <repository-name>: the name of the repository you want to push to.
- <tag-name>: the tag you want to give the pushed image.
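Filled in with hypothetical values (the image ID and the username `myuser` are examples only, not the article's actual values), the two commands might look like this:

```shell
# Tag the local image (ID 3f2a91c04b2a is hypothetical) for Docker Hub.
docker tag 3f2a91c04b2a myuser/servergo:v1

# Push the tagged image to the myuser/servergo repository.
docker push myuser/servergo:v1
```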
On the Docker Hub
So we have successfully pushed the 2 images, and now we need to deploy the stack:
$ docker stack deploy -c /home/docker/docker-compose.yml swarm-demo-app
Check the list of services:
Let's see which nodes the replicas are running on:
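These two checks are typically done with the following commands (a sketch; `swarm-demo-app_servergo` is the name the stack would generate assuming a service called `servergo`):

```shell
# List all services in the deployed stack with their replica counts.
docker stack services swarm-demo-app

# Show each task (replica) of a service and the node it is running on.
docker service ps swarm-demo-app_servergo
```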
Alternatively, you can create a service using the command with the following syntax:
$ docker service create --replicas <task-number> --name <service-name> <ID-Image> <command>
- <task-number>: the number of tasks you want to create (in other words, the number of copies of the image/container).
- <service-name>: the name you want to give the service.
- <ID-Image>: the ID (or name) of the image.
- <command>: the command you want to run inside the containers.
And we can quickly change the number of containers in the cluster with the following command:
$ docker service scale <service-name>=<number>
- <service-name>: the service whose container count we want to change.
- <number>: the desired number of containers.
Next, let's see whether the load balancing feature works.
Looking at the worker3 node, we see that it holds no replicas of the servergo_1 service. Let's send a test request to the servergo_1 service via worker3 and see what happens:
$ curl http://192.168.99.103:8080/api/v1/foods?id=2
This means that when we send requests to nodes in the swarm, whether those nodes hold one or more replicas of the service or none at all, the swarm's routing mesh forwards the requests through the ingress network to the swarm load balancer, which distributes them among the service's containers on the machines (manager and worker hosts/VPS) sharing the same swarm network. You can see the following image to better understand:
Retry with other requests:
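For reference, the routing mesh only applies to ports published on the ingress network. A sketch of how such a service might be created (the service and image names are hypothetical):

```shell
# Publish port 8080 through the routing mesh (ingress network).
# A request to port 8080 on ANY node is forwarded to a running task.
docker service create --name servergo --replicas 2 \
  --publish published=8080,target=8080 myuser/servergo:v1

# Even a node hosting no replica answers, via the routing mesh:
curl http://192.168.99.103:8080/api/v1/foods?id=2
```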
Now let's try shutting down the worker1 machine (as happens in reality when a server goes down) to see if anything new happens.
$ docker-machine stop <machine-name>
- <machine-name> = worker1
Check the list of nodes and services on the manager node. Nothing new to see, other than worker1 being Down.
Continue to check each service.
Here we see something new: when worker1 was shut down, the swarm manager created a new replica to replace the one that was lost and scheduled it onto a worker that was still running (namely worker3). This is the Desired state reconciliation feature (together with Scaling) mentioned in the Docker Swarm features section.
So a question arises: what happens if all of the worker nodes die?
In this case, the manager node will itself create additional replicas to maintain the number we configured (the desired state) and run them on the manager (that is, the manager node also acts as a worker node). And if this manager dies too, everything is over!!
Conversely, if the worker nodes are running but a manager node dies, the remaining manager nodes in the cluster detect this and elect one of themselves as the cluster's next leader (Swarm managers coordinate through the Raft consensus algorithm). Alongside Docker Swarm we also have another friend, Kubernetes (K8S), which is more widely deployed than Docker Swarm. In the next part we will discover some interesting things about it!!
If you want to see quality posts, discuss knowledge, or share your insights with the world, join our group on Facebook. ^^