
Networking in Docker


Overview of Networking in Docker

Networking in Docker is the means by which containers communicate with each other and with external workloads. One of Docker's most distinctive and flexible features is that containers can network with workloads that may or may not themselves run in Docker. Containers also work independently of the host platform they are deployed on, be it Windows, Linux, macOS, or a mix of these.


The scope of this article

This article covers the basics of networking in Docker and the drivers used for networking. It walks you through basic networking exercises, such as listing all networks, creating a new network, and inspecting a network, each with an example. It also explains networking on each of the network types, restricting the explanation to the default types.

Basic Networks in Docker

When Docker is installed, three networks are created automatically: bridge, none, and host. You can see these networks using the command docker network ls.

Docker networking is a pluggable system: you plug a network into a container via the corresponding driver. Several drivers are present by default and provide core networking. For the default networks (bridge, none, and host), the corresponding drivers are bridge, null, and host.

When you want to attach a specific network to your container, you can use the --network flag to specify your choice.
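
For example, a container can be attached to the default bridge network explicitly at launch (a minimal sketch; the image and container name here are placeholders):

Command: $ docker run -d --network bridge --name web nginx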

BRIDGE

The bridge network is the docker0 network present on the host, and the daemon connects all containers to this network by default. If you run the command ip addr show on the host, you can see the bridge displayed as the default network.

Note: ifconfig is a deprecated command now. You can alternatively use ip a as a shorthand notation for ip addr show.

The daemon connects all the containers of the host to the bridge by default, by creating a pair of virtual peer (veth) interfaces: one of them becomes the eth0 of the container, and the other stays in the namespace of the host. An IP address from the bridge's subnet is assigned while the interfaces are created.
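
On a Linux host, you can observe this pairing yourself: the docker0 bridge and one veth* peer interface per running container appear in the host's interface list (interface names will differ on your system):

Commands:
$ ip addr show docker0
$ ip link show type veth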

NONE

When you explicitly specify the network as none, the container is added to a stack with no external network interface (only loopback). This is useful in two cases (a quick check is shown below the list):

  • When the container needs no networking, such as for batch jobs.
  • When you want to set up custom networking yourself.
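
As a quick check (a minimal sketch using the alpine image), start a throwaway container on the none network and list its interfaces; only the loopback interface (lo) should appear:

Command: $ docker run --rm --network none alpine ip addr show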

HOST

The host network adds containers to the host's own network stack, which keeps no isolation between the containers and the host machine. Since the container shares its network namespace with the host, it is exposed to the public directly; effectively, the container and the host share the same IP address. Since no routing overhead is involved, this is faster than bridge networking. However, the security implications must be considered, as the container is directly exposed to the public.

When it comes to configurability, only the bridge and user-defined bridge networks are configurable. The none and host networks are not configurable in Docker.


Basic Network Drivers

Drivers make the networking subsystem pluggable in Docker. A few drivers exist by default, while others are user-created. Some of the default drivers are listed below, with a brief explanation of their functionality.

  • bridge: This is the default driver. When you don't specify a driver, this is the type of network created. Bridge networks are generally useful for communication between standalone containers.
  • host: This driver is for host networking. For standalone containers, it removes network isolation and lets them use the host's networking directly. It is available for swarm services on Docker 17.06 and higher.
  • overlay: This is used in distributed systems where multiple Docker daemon hosts are involved in communication. It enables networking between multiple Docker daemons in a swarm, between a swarm service and a standalone container, or between two standalone containers on different Docker daemons, removing the need for OS-level routing between these containers.
  • macvlan: This driver assigns a MAC address to a container, making it appear as a physical device on the network. The Docker daemon routes traffic to containers by mapping their MAC addresses.
  • none: No networking. Not applicable to swarm services.

Basic Operations in Networking

Below is a list of basic operations used in Docker networking that you need to know in order to establish networks successfully on Docker systems.

Listing All Docker Networks

Command Syntax: docker network ls

Options: None

Return Value: Displays all the networks available on the host.

Example

Command: $ docker network ls

Output:

NETWORK ID       NAME             DRIVER
7fca4eb8c647     bridge           bridge
9f904ee27bf5     none             null
cf03ee007fb4     host             host

Inspecting a Docker Network

Inspecting a network gives you more details about a particular network of interest on the host. For example, you get information on the containers connected to the network, with port and IP address details.

To inspect, you have to specify the name of the network. In the syntax below, "networkname" should be replaced with the network name of your choice.

Command Syntax: docker network inspect networkname

Options: networkname should be the name of the target network

Return Value: Returns all the details associated with the network.

Example:

Command: $ docker network inspect bridge

Output:
[
   {
   "Name": "bridge",
    "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
    "Scope": "local",
    "Driver": "bridge",
    "IPAM": {
        "Driver": "default",
        "Config": [
            {
                "Subnet": "172.17.0.1/16",
                "Gateway": "172.17.0.1"
            }
        ]
    },
    "Containers": {},
    "Options": {
           "com.docker.network.bridge.default_bridge": "true",
        "com.docker.network.bridge.enable_icc": "true",
           "com.docker.network.bridge.enable_ip_masquerade": "true",
           "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
           "com.docker.network.bridge.name": "docker0",
        "com.docker.network.driver.mtu": "9001"
    },
    "Labels": {}
   }
]

Note: the IP addresses will differ in your results.
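
If you only need specific fields, docker network inspect also accepts a --format flag with a Go template. For example, this extracts just the IPAM configuration of the bridge network:

Command: $ docker network inspect bridge --format '{{json .IPAM.Config}}'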

Creating your own Network

You can create a custom network for your container before launching it. This can be done using the command below, where "drivername" specifies the driver for the network.

Command Syntax: docker network create --driver drivername name

Options: "drivername" substitutes for the name of the driver used for the network.

"name" substitutes for the name of the network to be created.

Return Value: Returns the long string ID of the newly created network.

Example:

Command: $ docker network create --driver bridge new_nw

Output:

f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f

Now, you can attach the new network when you launch a container. This can be done using the following command for an Ubuntu container:

Command: $ docker run -it --network=new_nw ubuntu:latest /bin/bash

To see the details, inspect the network and confirm that the container is attached to it.
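
For example, from another terminal while the Ubuntu container is running:

Command: $ docker network inspect new_nw

The Containers section of the output should list the Ubuntu container.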


Ways of Doing Networking in Docker

Now that we know the basic commands to create or inspect a network in Docker, let us go a step further and look at the ways of doing networking in Docker. This helps us understand how different Docker architectures are handled in networking. That said, there are two ways of doing networking in Docker:

  • Networking on a single host
  • Networking on a cluster of two or more hosts

Networking on a Single Host

On a single host, containers generally communicate with each other through the IP addresses obtained from the default bridge network. They can also connect using container names as the key. This is the more convenient way of communicating, since IP addresses are assigned dynamically during container creation while names are easier to use; note, though, that automatic name resolution works only on user-defined networks, as demonstrated later in this article.

Multi-host Networking

This is very different from single-host networking in terms of connection and performance. In a multi-host system, containers can be spread across the hosts, so networking must be established between containers on different hosts as well as among containers on the same host. That said, service discovery plays a vital role in detecting the hosts.

Service discovery helps you resolve a hostname, and hence obtain an IP address.

To work in multi-host mode, you have two options:

  • Run the Docker Engine in swarm mode
  • Run multiple hosts with a key-value store that supports service discovery

Networking Tutorials

We cover networking details only for the default networking techniques; the user-defined variants of each technique are not handled in this content. There are four default networking types available in Docker:

  • Bridge Networking
  • Host Networking
  • Overlay Networking
  • Macvlan Networking

The section below gives an insight into networking in each of the types.

Bridge Networking

This information is restricted to networking with standalone Docker containers. It can be configured on Windows, macOS, and Linux.

We need two alpine containers to test networking and communication in this scenario. The prerequisite is a Docker installation that is up and running on your system.

Follow the steps below:

  • List the available networks. Use the following command to see the networks available on the host.

Command: $ docker network ls

Output:

NETWORK ID       NAME             DRIVER           SCOPE
17e324f45964     bridge           bridge           local
6ed54d316334     host             host             local
7092879f2cc8     none             null             local

The default bridge network will be used to connect the two containers.

  • Now, start two alpine containers running ash, which is the default shell for Alpine containers. The commands are shown below:

Commands:

$ docker run -dit --name alpine1 alpine ash
$ docker run -dit --name alpine2 alpine ash

Explanation

The -dit flags stand for detached, interactive, and TTY: the container runs detached in the background, stays interactive so you can type into it, and has a TTY allocated so you can see the input and output right away.

Since you started the containers detached, you are not connected to them. And because the --network flag was not used, the containers connect to the default bridge network.

Now, list the containers to confirm that the two containers are running.

Command: $ docker container ls

CONTAINER ID   IMAGE    COMMAND   CREATED          STATUS
602dbf1edc81   alpine   "ash"     4 seconds ago    Up 3 seconds
da33b7aa74b0   alpine   "ash"     17 seconds ago   Up 16 seconds

You can now inspect the bridge network to see the details of the containers connected to it.

Command: $ docker network inspect bridge

Output:

[
{
     "Name": "bridge",
     "Id": "17e324f459648a9baaea32b248d3884da102dde19396c25b30ec800068ce6b10",
     "Created": "2017-06-22T20:27:43.826654485Z",
     "Scope": "local",
     "Driver": "bridge",
     "EnableIPv6": false,
     "IPAM": {
         "Driver": "default",
         "Options": null,
         "Config": [
             {
               "Subnet": "172.17.0.0/16",
                 "Gateway": "172.17.0.1"
             }
         ]
     },
     "Internal": false,
     "Attachable": false,
     "Containers": {
            "602dbf1edc81813304b6cf0a647e65333dc6fe6ee6ed572dc0f686a3307c6a2c": {
             "Name": "alpine2",
            "EndpointID": "03b6aafb7ca4d7e531e292901b43719c0e34cc7eef565b38a6bf84acf50f38cd",
             "MacAddress": "02:42:ac:11:00:03",
             "IPv4Address": "172.17.0.3/16",
             "IPv6Address": ""
         },
         "da33b7aa74b0bf3bda3ebd502d404320ca112a268aafe05b4851d1e3312ed168": {
             "Name": "alpine1",
             "EndpointID": "46c044a645d6afc42ddd7857d19e9dcfb89ad790afb5c239a35ac0af5e8a5bc5",
             "MacAddress": "02:42:ac:11:00:02",
             "IPv4Address": "172.17.0.2/16",
             "IPv6Address": ""
         }
     },
     "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
         "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
     },
     "Labels": {}
}
]

This output shows the IP address of the gateway between the host and the network, which is 172.17.0.1.

It also shows the two containers with details including the IP address of each: 172.17.0.2 for alpine1 and 172.17.0.3 for alpine2.

The containers are running in the background and you are not attached to them. Use docker attach to connect to alpine1.

Command: $ docker attach alpine1

Output:

/ #

The "#" prompt indicates that you are the root user inside the container. To see more about the network interfaces of alpine1, you can use the command ip addr show.

To check connectivity from inside alpine1, use the following command to ping google.com and get a response.

Command: # ping -c 2 google.com

-c 2 limits the ping to two attempts.

You might see a similar response

Output:

PING google.com (172.217.3.174): 56 data bytes
64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.841 ms
64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.897 ms
--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 9.841/9.869/9.897 ms

Now that network connectivity is verified, try to ping the second container by its IP address.

Here is how you do it:

Command: # ping -c 2 172.17.0.3

Output:

PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.086 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.094 ms
--- 172.17.0.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.086/0.090/0.094 ms

Since you get a response, try connecting by container name instead of IP address.

Command: # ping -c 2 alpine2

Output:

ping: bad address 'alpine2'

This operation fails. Automatic name resolution between containers works only on user-defined networks, not on the default bridge network.
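
By contrast, pinging by name succeeds on a user-defined bridge network, because Docker provides embedded DNS there. A minimal sketch (the network and container names are illustrative), run from the host after detaching from alpine1:

Commands:
$ docker network create alpine-net
$ docker run -dit --name alpine3 --network alpine-net alpine ash
$ docker run -dit --name alpine4 --network alpine-net alpine ash
$ docker exec alpine3 ping -c 2 alpine4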

  • You can detach from alpine1 without stopping it by pressing CTRL + p, then CTRL + q.
  • Stop and remove both containers:

Commands:
$ docker container stop alpine1 alpine2
$ docker container rm alpine1 alpine2


Host Networking

This is the networking technique in which there is no network isolation between the container and the host.

In this tutorial, let's start an nginx container that binds directly to port 80 on the Docker host. In network terms, this gives the same level of isolation as running the nginx process directly on the host rather than in a container. However, storage, the process namespace, and the user namespace of the nginx process remain isolated from the host.

For this operation, port 80 must be free on the Docker host.

Note: Host networking is currently available only on Linux; it is not supported on Windows, macOS, or Docker EE for Windows Server.

Follow the steps below to establish host networking:

  • Create a detached container. Run the command given below:

Command: $ docker run --rm -d --network host --name my_nginx nginx

Explanation

--rm - removes the container once it stops.

-d - starts the container detached (in the background, as a process).

  • Connect to nginx by browsing to http://localhost:80/ (a command-line alternative is shown after these steps).

  • Examine the network stack with ip addr show and ensure that no new network interface was created.
  • Find out which process is bound to port 80. To do this, use the command sudo netstat -tulpn | grep :80.
  • You have to use sudo because the process is owned by the Docker daemon; without it you will not be able to see the PID or the process name.
  • Stop the container using the command:

Command: $ docker container stop my_nginx
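
As an alternative to the browser check in the steps above, you can fetch the default page from the host while the container is still running (a minimal sketch, assuming curl is installed):

Command: $ curl http://localhost:80/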

Overlay Networking

Overlay networking is the technique used for swarm services. It can be explored with the default overlay network, which Docker sets up automatically when you initialize or join a swarm. Note, however, that the default network is not the best option for production scenarios.

To configure overlay networking, you need at least a single-node swarm. This is achieved by starting Docker and executing docker swarm init on the host.

In this walkthrough, you will see how a service you create works from the individual container's point of view.

Pre-requisites

This tutorial requires three virtual or physical Docker hosts that are connected on the same network with no firewall between them, all running Docker 17.03 or higher. The three hosts will be referred to as manager, worker-1, and worker-2. The manager both manages the swarm and runs a service, while the workers only run services.

If you don't have three hosts, you can set them up as Ubuntu instances on a cloud service such as Amazon EC2, all on the same network with all communication allowed, and then follow the installation instructions for Docker CE on Ubuntu.

Procedure for Networking

Goal:

At the end of the networking, all the Docker hosts will be joined into a swarm and connected using an overlay network called ingress.

Steps

  • On the manager host, initialize the swarm. To do this, use the command below:

Command: $ docker swarm init --advertise-addr=<MANAGER-IP>

The --advertise-addr flag is optional if the host has only one network interface. Here, <MANAGER-IP> stands for the manager's IP address.

The output includes a join token. Make sure you store it somewhere safe, such as a password manager, as it is needed to join worker-1 and worker-2 to the swarm.
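
If you lose the token, you can print the worker join command (including the token) again on the manager at any time:

Command: $ docker swarm join-token worker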

  • Join the worker-1 to the swarm.

Command: $ docker swarm join --token <TOKEN> --advertise-addr <WORKER-1-IP> <MANAGER-IP>:2377

The --advertise-addr flag is optional if worker-1 has only one network interface. <TOKEN> is the join token from the previous step.

  • On worker-2, employ the same procedure to join the swarm.

Command: $ docker swarm join --token <TOKEN> --advertise-addr <WORKER-2-IP> <MANAGER-IP>:2377

  • You can now see all the nodes on the manager using:

Command: $ docker node ls

Output:

ID                          HOSTNAME           STATUS   AVAILABILITY
d68ace5iraw6whp7llvgjpu48 * ip-172-31-34-146   Ready    Active
nvp5rwavvb8lhdggo8fcf7plg   ip-172-31-35-151   Ready    Active
ouvx2l7qfcxisoyms8mtkgahw   ip-172-31-36-89    Ready    Active

You can now list the networks on each of the nodes and check that an overlay network called ingress and a bridge network called docker_gwbridge are listed.
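
For example, run the following on each node and look for ingress (overlay) and docker_gwbridge (bridge) in the output:

Command: $ docker network ls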

The docker_gwbridge connects the ingress network to the Docker host's network interface so that traffic can flow between the swarm managers and workers. By default, any swarm service created without a network specification is connected to the ingress network. However, it is recommended to use separate overlay networks for each group of applications or tasks. In the following steps, you will create two overlay networks and connect services to them.

Services and Overlay networks

1) First, create a new overlay network called nginx-net on the manager. To do this, use the command below:

Command: $ docker network create -d overlay nginx-net

You don't have to create the overlay network on the other two nodes; it will be created there automatically when those nodes run a service task that requires it.

2) Now, create a 5-replica nginx service on the manager, connected to nginx-net.

The service will publish port 80 to the outside world; the service containers can communicate with each other without opening any other ports.

Command:
$ docker service create \
  --name my-nginx \
  --publish target=80,published=80 \
  --replicas=5 \
  --network nginx-net \
  nginx

Note: Services can be created only on the manager.

If no mode is specified with the --publish flag, ingress mode is used by default.

This means that if you browse to port 80 on any of the three nodes, you will be connected to one of the five service tasks, even if no task is currently running on the node you are browsing.
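
To see the routing mesh in action (a sketch; <NODE-IP> is a placeholder for the address of any of the three nodes):

Command: $ curl http://<NODE-IP>:80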

3) To keep track of the service as it is set up, use the command:

Command: $ docker service ls

4) Now, inspect nginx-net on all three nodes. Note that you did not explicitly create the overlay network on the workers; Docker took care of it. Pay attention to the Containers and Peers sections of the output: Containers lists all the service task containers connected to the overlay network from that host.
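
For example, on each node:

Command: $ docker network inspect nginx-net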

5) Look at the information about the ports and endpoints by executing the service inspect command on the manager.

Command: docker service inspect my-nginx

6) Create a new network called nginx-net-2 and update the service to use this network. To do this, use the commands below:

Commands:
$ docker network create -d overlay nginx-net-2
$ docker service update \
  --network-add nginx-net-2 \
  --network-rm nginx-net \
  my-nginx

7) Run docker network inspect nginx-net to verify that no containers are connected to it anymore.

Run docker network inspect nginx-net-2 to see that all the service task containers are connected to this network.

Note: Overlay networks are created automatically on nodes where service tasks need them, but they are not removed automatically.

8) The last step is to clean up the service and the networks. Execute the following commands on the manager; the networks on the other nodes will be removed upon instruction from the manager.

Commands:
$ docker service rm my-nginx
$ docker network rm nginx-net nginx-net-2

Macvlan Networking

In macvlan networking, the Docker daemon routes traffic to the corresponding container based on the destination MAC address.

In this procedure, we will set up a macvlan network and attach containers to it.

To establish this networking, make sure that you have access to your physical networking equipment, as most cloud providers block macvlan networking.

Macvlan is supported only on Linux hosts running kernel version 3.9 or higher. It is not available on other platforms such as Windows, macOS, and Docker EE for Windows Server.

The example below assumes that the Ethernet interface is eth0. If your device is configured with a different interface name, use that name instead (you can list interfaces with ip link show).

Procedure

In this example, traffic flows through eth0 and Docker routes it to the containers based on their MAC addresses. Macvlan networking makes the containers appear to be physically attached to the network.

1) The first step is to create a macvlan network named my-macvlan-net. Use the following command:

Command:
$ docker network create -d macvlan \
  --subnet=172.16.86.0/24 \
  --gateway=172.16.86.1 \
  -o parent=eth0 \
  my-macvlan-net

You can now list or inspect the network to ensure that it exists and is a macvlan network.
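
For example:

Commands:
$ docker network ls
$ docker network inspect my-macvlan-net

In the docker network ls output, the DRIVER column for my-macvlan-net should read macvlan.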

2) Now, start an alpine container and attach it to the network just created. You can refer to the command below:

Command:
$ docker run --rm -itd \
  --network my-macvlan-net \
  --name my-macvlan-alpine \
  alpine:latest \
  ash

The -itd flags start the container detached in the background, with an interactive TTY allocated.

The  --rm flag removes the container when it is stopped.

3) Inspect the my-macvlan-alpine container and note the MacAddress key within the Networks section.

Command: $ docker container inspect my-macvlan-alpine

Output:

...truncated...
"Networks": {
  "my-macvlan-net": {
   "IPAMConfig": null,
   "Links": null,
   "Aliases": [
       "bec64291cd4c"
   ],
   "NetworkID": "5e3ec79625d388dbcc03dcf4a6dc4548644eb99d58864cf8eee2252dcfc0cc9f",
   "EndpointID": "8caf93c862b22f379b60515975acf96f7b54b7cf0ba0fb4a33cf18ae9e5c1d89",
   "Gateway": "172.16.86.1",
   "IPAddress": "172.16.86.2",
   "IPPrefixLen": 24,
   "IPv6Gateway": "",
   "GlobalIPv6Address": "",
   "GlobalIPv6PrefixLen": 0,
   "MacAddress": "02:42:ac:10:56:02",
   "DriverOpts": null
  }
}
...truncated

4) Use docker exec commands to see how the container sees itself in the network.

Command: $ docker exec my-macvlan-alpine ip addr show eth0

Output:

9: eth0@tunl0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:10:56:02 brd ff:ff:ff:ff:ff:ff
    inet 172.16.86.2/24 brd 172.16.86.255 scope global eth0
       valid_lft forever preferred_lft forever

Command: $ docker exec my-macvlan-alpine ip route

default via 172.16.86.1 dev eth0
172.16.86.0/24 dev eth0 scope link  src 172.16.86.2

5) Finally, stop the container and remove the network.

$ docker container stop my-macvlan-alpine 
$ docker network rm my-macvlan-net

This brings us to the end of networking in Docker. This chapter has covered the fundamentals of Docker networking; make sure these concepts are well ingrained before delving deeper into other aspects of networking in Docker.
