How is Docker networking built and how does it operate?

Tram Ho

One of the reasons Docker is such a powerful platform is that you can easily connect containers and their services to each other, or to non-Docker workloads. Docker containers and services do not need to know whether they are deployed in Docker, or whether the services they talk to are Docker services. Whether your hosts run Linux, Windows, or a combination of the two, Docker can manage them all – in other words, Docker is platform-independent. Docker owes much of this to the networking system it builds on. In this article, we will learn about the basic concepts and principles that make up the networking system in Docker and how to use its different network types (drivers), so that you can take a clearer view of application design and deployment and make full use of the Docker network's capabilities.

Components to know in docker networking

Docker networking is built from three main components:

  1. Container Network Model (CNM): a detailed design guide, or networking design standard, for container systems; it defines the basic building blocks that make up a Docker network.
  2. libnetwork: an implementation of the CNM used by Docker, written in the Go language, which fully implements the CNM's core components.
  3. The drivers: concrete implementations of the CNM for different network models, allowing it to be applied to different use cases.

Container network model (CNM)

Everything about the Docker network starts from its design guide, the CNM. As mentioned above, it defines the basic components/blocks of the Docker network, of which there are three:

  • Sandbox: a standalone network stack, including Ethernet interfaces, routing tables, and DNS configuration.
  • Endpoint: a virtual network interface, just like the network interfaces on an ordinary computer; its job is to make network connections. In the CNM design, an endpoint connects a sandbox to a network (the last block in the CNM).
  • Network: a software implementation of a switch (an 802.1d bridge); its job is to group together, and isolate, a set of endpoints that need to communicate with each other.

Let us go through a practical example to better understand how the CNM components relate to containers. In the figure below, each container (A and B) has a sandbox inside it to provide network connectivity.

Container B has two network interfaces (endpoints) and connects to networks A and B. The two containers can communicate with each other because they are both attached to network A, while the two endpoints of container B cannot talk to each other because they are not on the same network. Since endpoints behave like network adapters, each endpoint can only connect to a single network, so if a container wants to join more than one network at a time, it needs more than one endpoint (as container B does in the example). We can also see that containers A and B run on the same Docker host, yet their network stacks are completely isolated in the OS via their sandboxes.
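Container B's situation – one container with two endpoints on two networks – can be reproduced with the CLI. A minimal sketch, assuming a Linux Docker host; the network and container names (netA, netB, containerB) are made up for illustration:

```shell
# Create two isolated bridge networks, playing the roles of networks A and B
docker network create netA
docker network create netB

# Start a container attached to netA (this gives it one endpoint)
docker run -dit --name containerB --network netA alpine sh

# Attach it to netB as well: Docker adds a second endpoint to the sandbox
docker network connect netB containerB

# The container's settings now list both networks
docker inspect --format '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' containerB
```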


libnetwork is Docker's implementation of the CNM: an open-source, cross-platform library written in Go.

In the early days of Docker, the entire CNM implementation lived inside the Docker daemon, but it grew too big and no longer followed the standard Unix rules of modular design, so it was split out into a separate library – and that is how libnetwork came to be.

In addition to implementing the components of the CNM, it provides other functions such as service discovery, ingress-based container load balancing (the load-balancing mechanism in Docker swarm), and the network control plane and management plane (which manage the network on the Docker host).


libnetwork can be thought of as an abstract layer defining the CNM components and the networking management functions for the Docker host, while the drivers are the concrete implementations for each different use case. Put another way, it is the drivers that provide actual connectivity and keep networks separate from each other. The relationship between the drivers and libnetwork is shown in the figure below.

In Docker there are a number of built-in drivers, called native drivers or local drivers:

  • On Linux: bridge, overlay, macvlan.
  • On Windows: nat, overlay, transparent, l2bridge.

Some third-party drivers can also be used in Docker; these are called remote drivers. Typical examples include calico, contiv, and kuryr.

Each of the above drivers is responsible for creating, managing, and deleting resources on networks of its kind. For example, the overlay driver is responsible for creating, adding, and removing resources in overlay networks.

The drivers above can also operate at the same time, allowing complex network topologies to be built to match the user's needs. In the rest of this article, we will look at some of the drivers most commonly used in Docker.

Single-host bridge network

This is the simplest network model in Docker. As the name implies, a single-host bridge network is created and managed by the bridge driver on Linux; on Windows this driver is called nat (the model and the way it works are exactly the same).

With the bridge driver, networks in this mode only connect containers on the same host, and the driver simulates the operation of a layer 2 switch (an 802.1d bridge).

The figure below shows two Docker hosts, each running containers on a bridge network with the same name, mynet. The containers cannot connect to each other because they are in fact on two different networks on two different hosts.

The bridge driver is the default driver when you create a network with the docker network create command without specifying one. As soon as Docker is installed, a ready-made bridge network is always available: on Linux that network is called bridge, and on Windows it is called nat. (Picture below)
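On a Linux host this is easy to verify. A small sketch; mynetwork is the example network name used in the rest of this section:

```shell
# List the networks Docker created at install time (bridge, host, none)
docker network ls

# Create a new network; with no -d flag, the bridge driver is used by default
docker network create mynetwork
docker network ls --filter name=mynetwork
```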

We can use the docker network inspect [network name] command to get more information about the newly created network.
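For example, assuming the mynetwork network created above:

```shell
# Show the driver, subnet, gateway, and attached containers of the network
docker network inspect mynetwork
```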

The bridge network is based on the Linux bridge, which has been part of the Linux kernel for more than 15 years, which means it is highly efficient and extremely stable.

We can inspect the bridge network that is available on Linux as soon as Docker is installed,

and we can use the ip link show command to list the network devices.
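A minimal sketch of both commands on a Linux Docker host:

```shell
# Inspect the default bridge network that ships with Docker
docker network inspect bridge

# List kernel network devices; docker0 is the Linux bridge backing that network
ip link show docker0
```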

Here we can see that Docker maps the default bridge network to the Linux bridge named docker0 in the Linux kernel, which in turn can be mapped to an Ethernet interface on the Docker host through its port mappings.

Returning to our example, mynetwork is the bridge network we created earlier. Let's use the brctl command to list the bridges existing on the system. In the result below, besides docker0, the bridge of the default network, br-0dd0064ef821 is the bridge mapped to the mynetwork network we created.
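brctl ships with the bridge-utils package; on systems without it, iproute2 gives the same list:

```shell
# List all Linux bridges (docker0 plus one br-<network id> per user-defined bridge network)
brctl show

# Equivalent on systems without bridge-utils
ip link show type bridge
```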

Now we will try to create a container that uses the mynetwork network.
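A sketch, using alpine as a small test image and c1 as an example container name:

```shell
# Run a container attached to mynetwork
docker run -dit --name c1 --network mynetwork alpine sh
```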

You can run inspect again to confirm that the container you just created is up and running on the mynetwork network.
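For instance, using a Go template to pull out just the attached container names:

```shell
# The Containers section of the output should now list c1
docker network inspect mynetwork --format '{{range .Containers}}{{.Name}} {{end}}'
```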

Run the brctl show command again, and you will see that an interface (the container's endpoint) is now attached to the newly created bridge.

We can run another container and ping the first one by its name. For example:
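A sketch, assuming the c1 container from the previous step is still running:

```shell
# Start a second container on the same network and ping c1 by name
docker run -it --name c2 --network mynetwork alpine ping -c 3 c1
```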

Thus, containers created on the same bridge network can connect to each other directly, without any port-mapping or address-translation mechanism at all. Moreover, we can ping a container by its name, used as a hostname, because every newly created container is registered with Docker's built-in DNS service, which resolves the container name to its IP when the two containers are running on the same network.

Verify once more that the bridge of mynetwork is mapped to the endpoint of the newly created container c2.

Multi-hosts overlay network

I plan to write a separate article explaining how Docker's overlay network model works, so this section will only outline the basic idea and main purpose of the overlay network.

The overlay network operates across multiple hosts. It allows a single network to span multiple Docker hosts, so containers on those hosts can communicate with each other as if they were on the same layer 2 segment.

The overlay network is the default network when we run Docker in swarm mode (a cluster of Docker hosts), and it can scale very easily, with just a few simple commands.

To create an overlay network, we just need to run the docker network create command with the driver option -d overlay.

I will walk through an example so we can see visually how the overlay network works. It is important to note that an overlay network can only be created when your Docker host has joined a swarm, or has initialized one. In this section I will not explain the swarm commands in detail; we will focus on the results of running containers over the overlay network.

So I will build my example on Play with Docker.

Here I create two Docker instances:

When they are up, on the instance named node1 (in the image), I initialize swarm mode by running:
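A sketch; the IP address below is a hypothetical value for node1's reachable interface (Play with Docker instances have several interfaces, so --advertise-addr is usually needed):

```shell
# Initialize a swarm and make this host the manager
docker swarm init --advertise-addr 192.168.0.8
```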

Once this Docker host has become the manager of a swarm, it prints the docker swarm join command for worker nodes to the screen. Copy this command and run it on node2 to join the swarm.
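The printed command has this shape; the token and address below are placeholders for the values swarm init actually prints:

```shell
# Run on node2 to join the swarm as a worker
docker swarm join --token SWMTKN-1-<token> 192.168.0.8:2377
```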

Now that the two nodes have joined the same swarm, on node1, the manager node, we create a network with the overlay driver. Note that the extra --attachable flag is needed to be able to run standalone containers on this network.
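A sketch, run on node1; overnet is a made-up example network name:

```shell
# Create a multi-host overlay network that standalone containers can attach to
docker network create -d overlay --attachable overnet
```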

Once we have the overlay network, we create two containers, as in the bridge network example, but this time on the two different hosts (node1 and node2):
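A sketch, assuming the overnet network created above; container names c1 and c2 mirror the bridge example:

```shell
# On node1
docker run -dit --name c1 --network overnet alpine sh

# On node2: the overlay network is pulled to the worker as soon as a container attaches
docker run -it --name c2 --network overnet alpine ping -c 3 c1
```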

When both are running, we can stand in the container on either node and ping the other container. The ping succeeds, which means the containers on the two hosts are connected to each other without any port- or address-mapping mechanism at all – this is one of the things that makes Docker swarm and Kubernetes so powerful.

Host networking

Host network mode does not separate the container's network stack from the host's: a container running in this mode shares the Docker host's network namespace. For example, if you run a container with host-mode networking (the host driver) and its application binds to port 80, the application is actually reachable on port 80 of the Docker host's IP.

Because the container shares the host's network namespace, port mappings are ignored when you use the host network with docker run or docker-compose.
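A sketch, using the stock nginx image (which listens on port 80 by default):

```shell
# Run nginx directly on the host's network stack; no -p flag is needed (or honored)
docker run -d --name web --network host nginx

# The server answers on port 80 of the Docker host itself
curl -s http://localhost:80 >/dev/null && echo "reachable"
```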

Host mode can be used to optimize system performance, because no NAT mechanism is needed for communication between the container and requests from outside the Docker host.

Host mode can only be used on hosts running Linux, so on Windows, Windows Server, or Mac you cannot create a network in this mode.


In this article, I have introduced the components of Docker networking, its drivers, and the purpose of each network type. Hopefully you now have a clearer view and a better understanding of where each network type fits when deploying your product with Docker. In future articles on Docker, I will cover the aspects of Docker networking that were mentioned but not presented here, such as ingress load balancing and service discovery.



Source: Viblo