Docker containers can easily communicate over the network with other containers and with hosts, using newer network configuration technologies that make Docker networking far more convenient. The virtual interface called docker0 is central to network configuration among Docker containers.
In this chapter, we shall discuss the virtual docker0 interface, how Docker is configured on local networks, multi-host networking for Docker, and the installation and configuration technologies used with Docker.
Configuring Docker to the Network
When the Docker daemon starts, it creates docker0, a virtual interface on the local host system. Docker chooses a subnet and address within the private ranges defined by RFC 1918, making sure the addresses are not already in use on the host, and assigns them to docker0. For example, suppose docker0 is assigned 172.17.42.1/16. A /16 network mask uses 16 bits for the network portion, leaving 65,534 usable addresses for the Docker containers and the host system. The Media Access Control (MAC) address of each container is generated from its IP address, mainly to prevent Address Resolution Protocol (ARP) collisions. The range used for generating MAC addresses is 02:42:ac:11:00:00 – 02:42:ac:11:ff:ff.
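The arithmetic above can be checked directly. The following is a minimal sketch, assuming the 172.17.42.1/16 example and a sample container address of 172.17.0.2 (both illustrative):

```shell
# A /16 mask leaves 16 host bits: 2^16 - 2 usable addresses
# (network and broadcast addresses excluded).
echo $(( (1 << 16) - 2 ))        # 65534

# Docker builds the container MAC from the fixed prefix 02:42 plus
# the four octets of the container's IP address, e.g. 172.17.0.2:
ip=172.17.0.2
printf '02:42:%02x:%02x:%02x:%02x\n' $(echo "$ip" | tr '.' ' ')
# -> 02:42:ac:11:00:02, inside the range quoted above
```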
The docker0 interface is not an ordinary one. It is a virtual Ethernet bridge that automatically forwards network packets between any interfaces attached to it. This enables communication between Docker containers and between containers and the host system. Each time a Docker container is created, a pair of peer interfaces is also created. They resemble the two ends of a pipe: data packets passed in at one end are received at the other. One of the peers becomes eth0 inside the container; the other is given a unique, automatically generated name (for example, vethAQI2QT) and is kept in the host system's namespace. Each veth* interface is bound to the virtual bridge docker0. In this way, Docker shares a virtual subnet between each Docker container and the host system.
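The bridge and veth pairs described above can be inspected on any host running Docker. A small sketch, assuming the default docker0 bridge (interface names and addresses will differ on your system):

```shell
# Show the bridge and the private subnet assigned to it.
ip addr show docker0

# List the veth* interfaces currently attached to the bridge.
brctl show docker0

# Inside a container, the peer end of the pipe appears as eth0.
docker run --rm busybox ip addr show eth0
```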
Simple Ways to Configure Docker to the Local Area Network
There are four simple ways in which Docker containers can be configured on a local area network. The solutions discussed below are not intended as practical recipes; they illustrate a few of the fundamental networking technologies available on Linux.
If you wish to use one of these solutions for anything beyond a technology demonstration, consider the pipework script, which automates many of these network configurations.
Goals and Assumptions
In the following examples, there is a host with IP address 10.12.0.117 on the 10.12.0.0/20 network. We want to create a Docker container and expose it as 10.12.0.118. If you run Fedora 20 with Docker 1.1.2, the util-linux package is recent enough to include the nsenter command. If you lack that convenient
tool, a simple Dockerized installation is available at jpetazzo/nsenter on GitHub.
NAT for Network Configuration of Docker
Network Address Translation (NAT) uses Docker's normal network model, combined with NAT rules on the host system that redirect incoming traffic to the appropriate container IP addresses.
First, allocate the target address on a host interface:
# ip addr add 10.12.0.118/20 dev em1
To start the Docker container, use the -p option to bind its exposed ports to the target IP address and port on the local host.
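A hedged sketch of such a command, assuming the example address above and using the stock nginx image as an illustrative web server:

```shell
# Bind the container's exposed port 80 to 10.12.0.118:80 on the host.
docker run -d --name web -p 10.12.0.118:80:80 nginx

# The server should now answer on the target address.
curl http://10.12.0.118/
```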
With the above command, Docker configures its standard network model:
- Docker first builds a veth interface pair.
- It connects one end to the virtual bridge, docker0.
- It places the other end inside the container's namespace under the name eth0.
- It allocates an IP address from the network used by the virtual bridge, docker0.
Since we added -p 10.12.0.118:80:80 to the host command line, Docker also builds the rule below in the DOCKER chain of the nat table, which is executed from the PREROUTING chain.
This rule matches traffic with destination address 10.12.0.118/32 that does not originate from the docker0 bridge and is directed to TCP port 80.
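The rule described above looks roughly like the following (a sketch in iptables-save notation; the container address 172.17.0.4 is illustrative):

```
-A DOCKER -d 10.12.0.118/32 ! -i docker0 -p tcp -m tcp --dport 80 \
    -j DNAT --to-destination 172.17.0.4:80
```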
You can now access the web server from any system on the local network using the target IP address.
If the Docker container initiates a network connection to another system, that connection will appear to come from the IP address of the local host machine. We can rectify this by adding an SNAT rule to the POSTROUTING chain that rewrites the source IP address.
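Such an SNAT rule might look like the following sketch, again assuming the illustrative container address 172.17.0.4:

```shell
# Make outbound traffic from the web container appear to come
# from 10.12.0.118 rather than the host's primary address.
iptables -t nat -I POSTROUTING -s 172.17.0.4 \
    -j SNAT --to-source 10.12.0.118
```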
Using -I POSTROUTING places our SNAT rule at the top of the POSTROUTING chain. This is necessary because Docker has already added the rule below to the POSTROUTING chain:
-A POSTROUTING -s 172.17.0.0/16 ! -d 172.17.0.0/16 -j MASQUERADE
This MASQUERADE rule matches traffic from any Docker container, so we must place our SNAT rule as early as possible in the POSTROUTING chain for it to have any effect.
With these rules in place, traffic to 10.12.0.118 on port 80 is directed to the web container, and traffic originating from the web container appears to come from 10.12.0.118.
Linux Bridge Devices for Docker
The previous configuration example is simple but has a few restrictions. If you need to configure a network interface using DHCP, or if you have an application that must run on the same layer 2 broadcast domain as other systems on the local network, NAT rules will not work.
This method instead uses a Linux bridge device, created with brctl, to connect the Docker containers directly to the physical network.
We add em2 to the new bridge and move its IP address from em2 to the bridge. Cautions: do not perform this initial configuration over a remote connection on em2, and note that making it permanent varies between distributions; as shown here, the configuration is not persistent.
Starting the container this way still supplies the standard eth0 interface inside the container, but we will ignore it and add a different one.
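The steps above can be sketched as follows, assuming the example addresses from earlier; the veth pair names and the way the container PID is obtained are illustrative:

```shell
# Create the bridge, attach em2, and move the host address onto it.
# Do NOT run this over a remote connection on em2.
brctl addbr br-em2
brctl addif br-em2 em2
ip addr del 10.12.0.117/20 dev em2
ip addr add 10.12.0.117/20 dev br-em2
ip link set br-em2 up

# Connect the container with a veth pair: one end on the bridge,
# the other pushed into the container's network namespace.
CONTAINER_PID=$(docker inspect -f '{{.State.Pid}}' web)
ip link add web-int type veth peer name web-ext
brctl addif br-em2 web-ext
ip link set web-ext up
ip link set dev web-int netns $CONTAINER_PID
nsenter -t $CONTAINER_PID -n ip link set web-int up
nsenter -t $CONTAINER_PID -n ip addr add 10.12.0.118/20 dev web-int
```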
OpenvSwitch Bridge for Docker Network
This is very similar to the previous method, but it uses Open vSwitch instead of a Linux bridge. The commands below assume that Open vSwitch is already installed and configured on your host system.
First, build an Open vSwitch (OVS) bridge with the ovs-vsctl command.
Next, add the external interface to the bridge.
Then continue as in the previous set of instructions.
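The bridge setup can be sketched as follows, reusing the example addresses (do not run this over a remote connection on em2):

```shell
# Create the OVS bridge and move the host address onto it.
ovs-vsctl add-br br-em2
ip addr del 10.12.0.117/20 dev em2
ip addr add 10.12.0.117/20 dev br-em2
ip link set br-em2 up

# Attach the physical interface to the bridge.
ovs-vsctl add-port br-em2 em2
```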
Caution: unlike the previous method, the OVS network configuration persists across reboots. If the host comes back up with em2 still a member of br-em2, your host can be left without network connectivity.
Be sure to run ovs-vsctl del-port br-em2 em2 before rebooting your system.
Macvlan Tool for Docker
This method is similar to the previous ones, but instead of using a bridge we create a macvlan interface, a virtual network interface attached to a physical network interface. Unlike the two previous solutions, this method requires no disturbance of your primary networking interface.
Begin by starting the Docker container as in the previous cases:
Then build a macvlan interface attached to the physical interface:
This creates a new macvlan interface, em2p1, associated with the em2 interface. We configure it in bridge mode, which permits all the macvlan interfaces on the same physical interface to communicate with one another.
Add the new interface to the container's network namespace:
- Bring up the link.
- Finally, configure the IP address and routing.
- Verify that the web server is reachable at 10.12.0.118 from another host system.
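The steps above can be sketched as follows, assuming the example addresses; the gateway 10.12.0.1 and the method of obtaining the container PID are illustrative:

```shell
# Create a macvlan interface on em2 in bridge mode.
ip link add em2p1 link em2 type macvlan mode bridge

# Hand it to the container's network namespace and configure it.
CONTAINER_PID=$(docker inspect -f '{{.State.Pid}}' web)
ip link set dev em2p1 netns $CONTAINER_PID
nsenter -t $CONTAINER_PID -n ip link set em2p1 up
nsenter -t $CONTAINER_PID -n ip addr add 10.12.0.118/20 dev em2p1
nsenter -t $CONTAINER_PID -n ip route add default via 10.12.0.1
```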
The host system cannot communicate with the macvlan interfaces through its primary interface. You can, however, build another macvlan interface on the host device, give it an appropriate address on the network, and route traffic for your Docker containers through that interface:
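For example (a sketch; the host-side address 10.12.0.119 is illustrative):

```shell
# A second macvlan interface so the host can reach the container.
ip link add em2p2 link em2 type macvlan mode bridge
ip link set em2p2 up
ip addr add 10.12.0.119/32 dev em2p2

# Route traffic for the container through the new interface.
ip route add 10.12.0.118/32 dev em2p2
```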
Multi-Host Docker Networking
It is now common to run Docker on multiple hosts. Many getting-started guides are available online that enable you to create Docker containers in a box, whether a Mac or a Linux server, using projects such as boot2docker.
There are various options available for running Docker across multiple boxes:
- Run Docker separately on every box, exposing ports on private or public interfaces so the containers can communicate with each other. This can become complicated and raises many security problems.
- Run a solution that abstracts the network, such as Weave. Though desirable, such projects are very new and do not yet integrate with orchestration tools like maestro-ng.
- Run a ready-made multi-host solution such as Flynn or Deis. These may not suit every user.
- Create a shared bridge over a mesh network between the boxes and have the Docker service spawn containers on it. Though it sounds complex, it can be implemented quite easily in practice.
Fundamentally, the following sequence of steps is performed:
- Install Docker on every server.
- Install Open vSwitch (OVS) on every server.
- Customize the network in /etc/network/interfaces on every server so that bridges and tunnels to the other hosts are created automatically.
- Customize the Docker service configuration on every server so that it manages only a part of the docker0 IP range. This prevents IP address overlap between newly created containers.
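The last step can be sketched as follows, assuming Ubuntu 14.04 (where Docker options live in /etc/default/docker) and an illustrative split of the 172.17.0.0/16 range; the --bip flag tells the Docker daemon which bridge address and subnet to use:

```shell
# On server 1: restrict docker0 to 172.17.1.0/24 (values illustrative).
echo 'DOCKER_OPTS="--bip=172.17.1.1/24"' >> /etc/default/docker
service docker restart

# Server 2 would use --bip=172.17.2.1/24, server 3 --bip=172.17.3.1/24,
# so newly created containers on different hosts never overlap.
```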
After a service restart or server reboot, a complete mesh network with redundant connections is obtained. The Docker service spawns containers in the appropriate IP address range, avoiding overlap, and the containers can connect to each other without exposing any ports on private or public interfaces.
Here is a quick glimpse of the major technologies we are using.
Let us assume that the servers run Ubuntu Server 14.04.02 LTS x64; you can adapt the configuration for other operating systems.
Docker itself can be installed by following the guidelines on the official website. We will look at the various Docker configuration options and services later in this chapter.
The Open vSwitch (OVS) packages available in the default repositories are outdated, so we build our own .deb files and distribute them to the various hosts. To keep production boxes tidy, use a small separate box for installing the build dependencies and compiling the packages.
Full build instructions are available in the Open vSwitch repository on GitHub.
To create your own packages, perform the following steps, adapting to newer versions as needed:
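A sketch of the build, assuming Debian packaging tooling and Open vSwitch 2.3.1 (the version and mirror URL are illustrative; consult the upstream build instructions for the current dependency list):

```shell
# On a disposable build box: install build dependencies.
sudo apt-get update
sudo apt-get install -y build-essential fakeroot debhelper autoconf \
    automake libssl-dev pkg-config bzip2 openssl procps

# Fetch and unpack the source, then build the .deb packages.
wget http://openvswitch.org/releases/openvswitch-2.3.1.tar.gz
tar xzf openvswitch-2.3.1.tar.gz
cd openvswitch-2.3.1
DEB_BUILD_OPTIONS='parallel=4 nocheck' fakeroot debian/rules binary

# The resulting .deb files land in the parent directory; copy them
# to each host and install with dpkg -i.
```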
You can create the mesh network with the Open vSwitch CLI tools, and Ubuntu provides a helper for defining the network in /etc/network/interfaces. Consider three servers, for example 10.0.0.1, 10.0.0.2, and 10.0.0.3. They can ping each other using these IP addresses, which may be either private or public.
This configuration must be adapted for each host system: the remote_ip values must be paired appropriately so that every server tunnels to the other two.
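The interfaces definition can be sketched as follows for the first server; the bridge name br0, the GRE port names, and the example addresses 10.0.0.2 and 10.0.0.3 are all illustrative, and on the other two servers the remote_ip values would be swapped accordingly:

```
# /etc/network/interfaces fragment on server 10.0.0.1
auto br0
allow-ovs br0
iface br0 inet manual
    ovs_type OVSBridge
    ovs_ports gre1 gre2

allow-br0 gre1
iface gre1 inet manual
    ovs_bridge br0
    ovs_type OVSPort
    ovs_extra set interface ${IFACE} type=gre options:remote_ip=10.0.0.2

allow-br0 gre2
iface gre2 inet manual
    ovs_bridge br0
    ovs_type OVSPort
    ovs_extra set interface ${IFACE} type=gre options:remote_ip=10.0.0.3
```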