One of the reasons Docker is such a powerful platform is that you can easily connect containers and their services to each other, or to non-Docker workloads. Containers and services do not need to be aware of whether they are deployed on Docker, or whether the services they talk to are Docker services. Whether your hosts run Linux, Windows, or a combination of the two, Docker can manage them in a platform-independent way. Docker owes much of this flexibility to its networking system. In this article, we will learn the basic concepts and principles behind Docker networking and how to use its different network types (drivers), so that you can design and deploy applications with a clearer view and take full advantage of what the Docker network offers.
Components to know in docker networking
Docker networking is built from three main components:
- Container network model (CNM): a detailed design guide, or networking design standard, for container systems; it defines the basic building blocks that make up a Docker network.
- libnetwork: an implementation of the CNM used by Docker. It is written in Go and fully implements the CNM's core components.
- Drivers: concrete implementations of the CNM for different network models, allowing it to be applied to each different use case.
Container network model (CNM)
Everything about Docker networking starts from its design guide, the CNM. As mentioned above, it defines the basic building blocks of a Docker network. There are three main blocks:
- Sandbox: an isolated network stack; it includes Ethernet interfaces, routing tables, and DNS configuration.
- Endpoint: a virtual network interface, just like the network interfaces on our computers, whose job is to make network connections. In the CNM design, an endpoint's job is to connect a sandbox to a network (the last block in the CNM).
- Network: a software implementation of a switch (an 802.1d bridge); its job is to group together, and isolate, a set of endpoints that need to communicate with each other.
Let's go through a practical example to better understand how the CNM components relate to containers. In the figure below, we can see that each container (A and B) has a sandbox placed inside it to provide network connectivity.
Container B has two network interfaces (endpoints) and connects to both network A and network B. The two containers can communicate with each other because they are both attached to network A, while the two endpoints of container B cannot talk to each other because they are not on the same network. Since endpoints behave like network adapters, each endpoint can only connect to a single network; if a container wants to join more than one network at a time, it needs more than one endpoint (as container B does in this example). We can also see that containers A and B run on the same Docker host, yet their network stacks are completely isolated in the OS via their sandboxes.
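The container-B scenario above can be reproduced on any Docker host. The names used below (`netA`, `netB`, `multi`) are hypothetical, chosen only for this sketch; the commands themselves are standard Docker CLI (they require a running Docker daemon):

```shell
# Create two isolated bridge networks (two CNM "network" blocks)
docker network create netA
docker network create netB

# Start a container attached to netA (one endpoint in its sandbox)
docker container run -d --name multi --network netA alpine sleep 1d

# Attach a second endpoint, connecting the same sandbox to netB
docker network connect netB multi

# The container's sandbox now shows two interfaces, one per endpoint
docker exec multi ip -o addr show
```

Like container B in the figure, `multi` can now reach peers on both networks, while peers on `netA` still cannot reach peers on `netB`.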
Libnetwork
libnetwork is an implementation of the CNM. It is open source, written in Go, cross-platform, and used by Docker.
In the early days of Docker, all of the CNM implementation lived inside the Docker daemon. Eventually it grew too large and no longer followed the Unix principle of modular design, so it was split out into a separate library: this is how libnetwork came to be.
In addition to implementing the CNM components, libnetwork also provides other functions such as service discovery, ingress-based container load balancing (the load-balancing mechanism in Docker Swarm), and the network control plane and management plane (which manage networking on the Docker host).
Drivers
libnetwork can be thought of as an abstract class: it defines the CNM components and the network-management functions for the Docker host, while the drivers are the concrete implementations for each specific use case. Put another way, the drivers are what provide actual connectivity and isolation between networks. The relationship between the drivers and libnetwork is shown in the figure below.
In Docker there are a number of built-in drivers, called native drivers or local drivers:
- On Linux: bridge, overlay, macvlan.
- On Windows: nat, overlay, transparent, l2bridge.
Third-party drivers can also be plugged into Docker; these are called remote drivers. Some typical names are calico, contiv, and kuryr.
Each driver is responsible for creating, managing, and deleting resources on the networks of its kind. For example, the overlay driver is responsible for creating, adding, and removing resources in overlay networks.
Several drivers can also operate at the same time, so that complex network topologies can be built to match the user's needs. In the rest of this article, we will look at some of the drivers commonly used in Docker.
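You can see which driver backs each network on your host with `docker network ls`. The sample output below is illustrative, not taken from a specific machine:

```shell
# List all networks and the driver behind each one
docker network ls

# Typical output on a fresh Linux install (IDs will differ):
# NETWORK ID     NAME      DRIVER    SCOPE
# a1b2c3d4e5f6   bridge    bridge    local
# b2c3d4e5f6a1   host      host      local
# c3d4e5f6a1b2   none      null      local
```

The `SCOPE` column distinguishes single-host (`local`) networks from swarm-wide (`swarm`) ones, which we will meet again in the overlay section.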
Single-host bridge network
This is the simplest network model in Docker. As the name implies, a single-host bridge network is created and managed by the bridge driver on Linux; on Windows the equivalent driver is called nat (the model and the way it works are exactly the same).
With the bridge driver, networks operating in this mode only connect containers on the same host, and the driver simulates the operation of a layer-2 switch (an 802.1d bridge).
The figure below shows two Docker hosts, each with containers running on a bridge network with the same name, mynet. The containers cannot reach each other because they are actually on two different networks on two different hosts.
bridge is the default driver when you create a network with the docker network create command without specifying a driver. After a successful Docker installation, there is always a ready-made bridge network available: on Linux it is called bridge, and on Windows it is called nat. (Picture below)
We can use the docker network inspect [network name] command to get more information about a newly created network.
```shell
hungnv@hungnv:~$ docker network inspect mynetwork
[
    {
        "Name": "mynetwork",
        "Id": "0dd0064ef821e2c8d6bbddb7f179d53168aadd943a52e5c9763db227b48e4f70",
        "Created": "2021-03-14T20:47:19.797031178+07:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
```
The bridge network is based on the Linux bridge, which has been part of the Linux kernel for more than 15 years, which means it is highly efficient and extremely stable.
We can inspect the bridge network that was created when Docker was installed:
```shell
hungnv@hungnv:~$ docker network inspect bridge | grep bridge.name
"com.docker.network.bridge.name": "docker0",
```
and you can use the ip link show command to list the network device:
```shell
hungnv@hungnv:~$ ip link show docker0
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:dc:1a:f3:58 brd ff:ff:ff:ff:ff:ff
```
We can see here that Docker maps its default bridge network to the Linux bridge named docker0 in the kernel, which in turn can be mapped to an Ethernet interface on the Docker host via port mappings.
Returning to our example, mynetwork is the bridge network we created earlier. Let's use the brctl command to list the bridges on the system. In the output below, in addition to docker0, the bridge of the default network, we see br-0dd0064ef821, the bridge mapped to the mynetwork network we created.
```shell
hungnv@hungnv:~$ brctl show
bridge name       bridge id           STP enabled   interfaces
br-0dd0064ef821   8000.0242d94afdc4   no
docker0           8000.0242dc1af358   no
```
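A side note: brctl comes from the bridge-utils package, which many modern distributions no longer install by default. The same information is available from the iproute2 tools that ship with the kernel:

```shell
# Equivalent of `brctl show`, using iproute2
bridge link show

# Or list only bridge-type devices
ip -o link show type bridge
```

Everything below uses brctl to match the original session, but either tool works.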
Now let's create a container attached to the mynetwork network.
```shell
$ docker container run -it --name c1 --network mynetwork alpine sh
```
You can run inspect again to make sure the container you just created is up and running on the mynetwork network:
```shell
$ docker network inspect --format '{{json .Containers}}' mynetwork
{"676356b3770e05fedc645aa0ba83701cbb2c8023f1a22efcca56789dbc46983d":{"Name":"c1","EndpointID":"0a457a8390f87e93cf32db6e620258df169f6dd6fbe0b23fe357e0611f97502d","MacAddress":"02:42:ac:12:00:02","IPv4Address":"172.18.0.2/16","IPv6Address":""}}
```
Run the brctl show command again, and you will see that the newly created bridge is now attached to the container's interface (endpoint).
```shell
$ brctl show
bridge name       bridge id           STP enabled   interfaces
br-0dd0064ef821   8000.0242d94afdc4   no            veth70c1a60
docker0           8000.0242dc1af358   no
```
We can run another container and ping the old one by its name. For example:
```shell
$ docker container run -it --name c2 --network mynetwork alpine sh
/ # ping c1
PING c1 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.203 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.182 ms
^C
--- c1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.182/0.192/0.203 ms
```
Thus, containers created on the same bridge network can connect directly to each other without any port or address mapping at all. In addition, we can ping a container by its name, used as a hostname: every newly created container is registered with Docker's built-in DNS service, so container names resolve to their IP addresses as long as the two containers run on the same user-defined network.
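You can see this built-in DNS service at work from inside a container. Docker's embedded DNS server listens at the documented address 127.0.0.11 inside each container on a user-defined network; the container names below are the ones from the example above:

```shell
# From inside c2 (via docker exec), look at the resolver configuration
docker exec c2 cat /etc/resolv.conf
# nameserver 127.0.0.11   <- Docker's embedded DNS server

# Resolve the peer container's name explicitly
docker exec c2 nslookup c1
```

Note that containers on the legacy default `bridge` network do not get this name resolution; it is one of the reasons user-defined bridge networks are recommended.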
Verify once more that the bridge of mynetwork is mapped to the endpoint of the newly created container c2:
```shell
$ brctl show
bridge name       bridge id           STP enabled   interfaces
br-0dd0064ef821   8000.0242d94afdc4   no            veth0c33fd9
                                                    veth70c1a60
docker0           8000.0242dc1af358   no
```
Multi-host overlay network
I plan to write a separate article explaining how Docker's overlay network model works, so this section will only cover the main idea and purpose of the overlay network.
An overlay network can operate across multiple hosts. It allows a single network to span multiple Docker hosts, so containers on those hosts can communicate with each other at layer 2.
The overlay network is the default network when we run Docker in swarm mode (a cluster of Docker hosts), and it scales very easily, with just a few simple commands.
To create an overlay network, we just run the docker network create command with the driver option set to -d overlay.
Let me walk through an example so we can see how the overlay network works in practice. It is important to note that an overlay network can only be created when your Docker host has joined a swarm or has initialized one. In this section I will not explain the swarm commands in detail; we will focus on the results of running containers on the overlay network.
I will run this example on Play with Docker.
Here I create 2 docker instances:
Once they are created, on the instance named node1 (in the image) I initialize swarm mode by running:
```shell
docker swarm init --advertise-addr=192.168.0.18
```
Once this Docker host has become the manager of a swarm, it prints the command that worker nodes can use to join. Copy this command and run it on node2 to join the swarm.
```shell
docker swarm join --token SWMTKN-1-3owjly8x6t5icj6sehmmapit1pp10kwvw2ls4f078oe47jqtrg-aovwwsmc58r2jibrgob20c3sq 192.168.0.18:2377
```
Now that the two nodes have joined the same swarm, on node1, the manager node, we create a network with the overlay driver. Note that we must add --attachable to be able to run standalone containers on this network.
```shell
docker network create -d overlay --attachable myoverlaynetwork
```
With the overlay network in place, we create two containers just as in the bridge network example, but this time on two different hosts (node1 and node2):
```shell
docker container run -it --name [c1 or c2] --network myoverlaynetwork alpine sh
```
Once they are running, we can attach to the container on either node and ping the other container. The ping succeeds, which means the containers on the two hosts are connected to each other without any port or address mapping mechanism at all; this is one of the things that makes Docker Swarm (and Kubernetes) so powerful.
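You can confirm that this network really spans the swarm by checking its scope, which appears in `docker network inspect` just like the `"Scope": "local"` field we saw for the bridge network earlier. The network name is the one created above:

```shell
# On either node: an overlay network reports "swarm" scope,
# unlike a bridge network, whose scope is "local"
docker network inspect --format '{{.Driver}} / {{.Scope}}' myoverlaynetwork
# overlay / swarm
```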
Host networking
The host network mode does not separate the container's network stack from the network of the host running Docker, which means a container running in this mode and the Docker host share the same network [namespace](https://docs.docker.com/get-started/overview/#the-underlying-technology). For example, if you run a container that binds to port 80 using host mode networking (the host driver), the application in that container is actually reachable on port 80 at the Docker host's IP address.
Because the container shares the host's network namespace, when you use the host network you cannot use port mapping, whether with docker run or with docker-compose.
host mode can be used to optimize system performance, because no NAT is needed for communication between the container and requests coming from outside the Docker host.
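A minimal sketch of the difference between the two modes, assuming an image such as nginx that listens on port 80 (and a Linux Docker host with those ports free):

```shell
# Bridge mode: the container's port 80 must be published to be reachable,
# and traffic is NATed from host port 8080 to the container
docker run -d --name web-bridge -p 8080:80 nginx
curl http://localhost:8080

# Host mode: no -p flag; the container binds directly to the host's port 80
docker run -d --name web-host --network host nginx
curl http://localhost:80

# In host mode any -p/--publish flags are simply ignored (with a warning)
```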
host mode is only available on hosts running Linux, so on Windows, Windows Server, or Mac you cannot create a network running in this mode.
Conclusion
In this article I introduced the components of Docker networking, its drivers, and the purpose of each network type. I hope that you now have a clearer view and a better understanding of which type of network to use when deploying your product with Docker. In upcoming articles on Docker, I will cover the other aspects of Docker networking that I mentioned but did not have room to present here, such as ingress load balancing and service discovery.