Understand the Basics of Docker
Traditionally, running each application in its own hypervisor-based virtual machine is wasteful: every virtual machine carries a full guest operating system, so much of the hardware's capacity is consumed by overhead rather than by the application itself. Docker overcomes these drawbacks. It is a high-speed container technology built on existing features of the Linux kernel. Unlike hypervisors, Docker never virtualizes an entire system; containers share only the host's kernel and system libraries.
This chapter gives an overview of Docker basics: Docker applications, components, architecture, images, containers, registries and how they work together. The other technologies Docker builds on are explained at the end of the chapter.
Docker: What Is It?
Docker is an open source container technology used to develop, ship and run applications. It was designed to deliver applications faster than traditional approaches allow. Using Docker, you can keep your applications cleanly separated from the underlying infrastructure, and even manage that infrastructure the same way you manage an application. Docker helps you ship code faster, test and deploy faster, and shorten the cycle between writing code and running it.
Docker achieves this by combining lightweight container virtualization with tooling and workflows that make applications easy to deploy and manage. The key idea is that Docker provides a convenient way to run any application securely inside an isolated container. This isolation and security allow many containers to run simultaneously on a single host. Because containers are lightweight and run without the extra load of a hypervisor, you can get more out of your hardware with Docker.
Underneath, container virtualization is a platform and tooling that are useful in several ways:

- Moving applications, along with their components, into Docker containers
- Shipping and distributing those containers for testing and further development
- Deploying the application to its production environment, whether in the cloud or on local machines
Quick Delivery of Applications
Docker fits naturally into the product development lifecycle. It lets software developers build local containers holding their applications and services, and then plug those containers into integration and deployment workflows.
Example: imagine that developers write their programs on local systems and share their work, development stack included, with colleagues through Docker. When ready, they push the code and its stack into a testing environment and execute whatever tests are needed. From the testing platform, the Docker images are pushed into production, completing the deployment.
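Under stated assumptions, this workflow can be sketched with Docker's command-line client. The image name `myapp`, the test script and the registry address below are all hypothetical placeholders, so treat this as an illustration rather than a prescriptive pipeline:

```shell
# Build a local image from the application's Dockerfile
# ("myapp" is a placeholder name for this sketch).
docker build -t myapp:dev .

# Run the test suite inside a throwaway container
# (run-tests.sh is a hypothetical script baked into the image).
docker run --rm myapp:dev ./run-tests.sh

# Tag and push the tested image so the production environment
# can pull and deploy exactly the same bits.
docker tag myapp:dev registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
```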
Convenient Deployment and Scaling
The container-based Docker platform is convenient for highly portable workloads. Docker containers can run on the developer's local machine, on physical or virtual systems in a data centre, or in a cloud environment.
The lightweight, highly portable nature of Docker makes dynamic workloads easy to manage: services and applications can be scaled up or torn down quickly, and Docker's speed makes scaling close to real time.
Achieving High Workloads and Product Density
Docker technology is fast and lightweight, which makes it a feasible, inexpensive alternative to hypervisor-based virtual machines. It is especially effective in high-density environments.
For example, building cloud services such as Platform as a Service (PaaS). Docker is also helpful in small and medium deployments where you wish to get more out of the resources you already have.
Major Components of Docker
Docker has two main components:
- Docker: the open source container virtualization platform.
- Docker Hub: a cloud service, delivered as Software as a Service (SaaS), for sharing and managing Docker images across applications.
Docker uses a client-server architecture. The Docker client communicates with the Docker daemon, which does the heavy lifting of building, running and distributing Docker containers. The daemon commonly runs on the same system as the client, but you can also connect your client to a remotely accessible daemon. Client and daemon communicate through a RESTful API over sockets.
The Docker daemon runs on the host machine. Users never interact with the daemon directly; they communicate with it through the Docker client.
The Docker client, in the form of the docker binary, is the primary user interface to Docker. It accepts commands from the user and relays them to and from the Docker daemon.
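As a quick sketch of this client-daemon split, the same client binary can talk to the local daemon or, via the `-H` flag, to a daemon on another machine. The hostname below is a placeholder, and in any real deployment the connection should be secured with TLS:

```shell
# Talk to the daemon on the local machine (the default).
docker ps

# Point the client at a remote daemon over TCP instead.
# "remote-host" is a placeholder; 2375 is the conventional
# unencrypted Docker port (use TLS on 2376 in practice).
docker -H tcp://remote-host:2375 ps
```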
To fully understand Docker's inner working, you need to know about the following three components:
- Docker image
- Docker registry
- Docker container
A Docker image is a read-only template. For example, an image might contain the Ubuntu operating system with an Apache web server and your web application installed. Docker images are used to create Docker containers. Docker provides simple, convenient ways to build new images or update existing ones, and you can also download images that other people have created. Docker images are thus the build component of Docker.
Docker images are stored in Docker registries. Registries may be public or private, and you can upload images to them or download images from them. Docker Hub is the best-known public registry; it holds a vast collection of images, which may include both images you have created yourself and images published by other users. Registries are the distribution component of Docker.
Docker containers are similar to directories in a file system: a container holds everything an application needs to run. Each container is created from a Docker image, and containers can be run, started, stopped, moved and deleted. Each container is isolated from the others and runs securely. Containers are the run component of Docker.
Thus the working of Docker can be summarized in three simple steps:
1. First, build a Docker image that holds the desired application.
2. Second, create Docker containers from that image.
3. Finally, share the image by pushing it to a Docker registry, either your own private registry or Docker Hub.
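The three steps above map directly onto three client commands. The image, container and account names here are hypothetical placeholders:

```shell
# 1. Build an image that holds the application
#    (reads the Dockerfile in the current directory).
docker build -t mywebapp .

# 2. Create and start a container from that image,
#    detached in the background, named "web1".
docker run -d --name web1 mywebapp

# 3. Push the image to a registry (Docker Hub here,
#    under a hypothetical "myuser" account).
docker tag mywebapp myuser/mywebapp:latest
docker push myuser/mywebapp:latest
```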
Working of a Docker Image
As already discussed, Docker images are read-only templates from which containers are created. Each image consists of a series of layers, which Docker combines into a single image using union file systems. Union file systems allow files and directories from separate file systems to be overlaid transparently, forming a single coherent file system.
This layering is the main reason Docker is so lightweight. When you change a Docker image, for example by updating an application to a newer version, a new layer is built. Instead of replacing or completely rebuilding the whole image, as with virtual machines, only the changed layer is added or updated. Likewise, you can avoid the overhead of distributing the entire image and share only the updated layer. This makes distributing Docker images faster and easier.
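You can see an image's layers with `docker history`; each row corresponds to one layer and records the build instruction that created it:

```shell
# List the layers of the ubuntu image, newest first.
# Each layer shows the instruction that created it and
# the size it adds to the overall image.
docker history ubuntu
```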
Every image begins with a base image. For example, Ubuntu-based images start from a base ubuntu image and Fedora-based images from a base fedora image. You can also use your own images as a base: if you have an Apache server image, you can use it as the base image for all your web applications. Docker usually obtains these base images from the public registry, Docker Hub.
Docker images are built from a base image by applying a simple, readable series of steps called instructions. Each instruction creates a new layer in your image. Instructions include actions such as:
- Run a command
- Add a file or directory
- Create an environment variable
- Specify the process to run when a container is launched from the image
These instructions are stored in a file called a Dockerfile. When you request an image build, Docker reads the Dockerfile, executes the instructions, and returns the final image.
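A minimal Dockerfile illustrating the four kinds of instruction listed above might look like this (written here via a shell heredoc; the Apache setup is a hypothetical example, not a hardened configuration):

```shell
cat > Dockerfile <<'EOF'
# Base image to start from
FROM ubuntu

# Run a command (creates a new layer)
RUN apt-get update && apt-get install -y apache2

# Add an application directory into the image
ADD ./site /var/www/html

# Create an environment variable
ENV APACHE_RUN_USER www-data

# Process to run when a container is launched
CMD ["apachectl", "-D", "FOREGROUND"]
EOF
```

Running `docker build -t mysite .` in the same directory would then execute these instructions layer by layer.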
Working of a Docker Registry
A Docker registry is the store for Docker images. Once built, images can be pushed either to Docker Hub, the public registry, or to your own private registry running behind your firewall.
Using the Docker client, you can search for already published images and then pull them down to your Docker host with the docker pull command so that containers can be built from them.
Docker Hub provides both public and private image storage. Images in public storage can be downloaded by anyone, whereas images in private storage can be downloaded and used only by you and the users you have granted access.
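In client commands the difference looks like this; the account and repository names are placeholders for this sketch:

```shell
# Pull a public image from Docker Hub (no login required).
docker pull ubuntu

# Log in, then pull from a private repository;
# "myuser" and "internal-app" are hypothetical names.
docker login
docker pull myuser/internal-app:latest
```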
Working of a Docker Container
A Docker container consists of an operating system, user files and metadata. Each container is created from a Docker image, and that image tells Docker what the container holds, what process to run when the container is launched, and a variety of other configuration data. The image itself is read-only and cannot be modified. When Docker builds a container from an image, it uses the union file system to add a read-write layer on top of the image, and your application runs in that layer.
Running a Docker Container
You can run a container using either the API or the docker binary. The Docker client acts as the intermediary, telling the Docker daemon to run the container:
$ docker run -i -t fedora /bin/bash
The docker binary is launched with the run command, which tells the Docker client to start a new container. At minimum, the client needs to tell the Docker daemon two things:

- Which base image to build the container from: in the example above, the base fedora image.
- The command to run inside the container once it is launched: in the example above, /bin/bash, which starts a Bash shell.
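Breaking the earlier command down flag by flag may help; the second command is a hypothetical variation to show a non-interactive use of the same image:

```shell
# -i keeps STDIN open so you can type into the container;
# -t allocates a pseudo-terminal for an interactive shell.
# "fedora" is the base image, /bin/bash the process to run.
docker run -i -t fedora /bin/bash

# The same image can run a one-off, non-interactive command
# instead of an interactive shell.
docker run fedora echo "hello from a container"
```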
What will happen when you run the container?
To run the container, Docker performs the following steps:
Pulls the base image: Docker first looks for the fedora image locally, and if it is not found, pulls it from Docker Hub. If the image is already present, Docker uses it for the new container.
New container creation: After fetching the base Docker image, the Docker container is built.
File system allocation and mounting of read-write layer: A file system is allocated for the container and a read-write layer is added on top of the Docker image.
Allocating bridge/network interface: Docker creates a network interface which facilitates the communication of Docker container with the local host.
IP address setup: an available IP address is found in the address pool and attached to the container.
Executes the desired process: Runs your specified processes.
Captures and provides application output: Docker connects to and records standard input, output and error so you can see how your application is running.
You now have a running container! From this point, you can interact with your application and manage the container. When you are finished, stop the container.
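The management commands mentioned earlier cover the rest of the lifecycle; the container name `web1` below is a placeholder:

```shell
# List running containers.
docker ps

# Stop, restart, and finally remove a container by name.
docker stop web1
docker start web1
docker stop web1
docker rm web1
```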
Technology Underlying Dockers
Docker is written in Go and takes advantage of several Linux kernel features to deliver its functionality.
Docker uses a technology called namespaces to provide the isolated workspaces we call containers. When you run a container, Docker creates a set of namespaces for that container.
Namespaces are a familiar concept from object-oriented programming languages such as C++ and C#. Here they provide an isolation layer: each aspect of a container runs in its own namespace and does not have access outside it.
Some of the namespaces Docker uses are:

- The pid namespace: used for process isolation (PID: process ID).
- The uts namespace: used for isolating kernel and version identifiers (UTS: Unix Timesharing System).
- The net namespace: used for managing network interfaces.
- The mnt namespace: used for managing mount points.
- The ipc namespace: used for managing access to IPC (inter-process communication) resources.
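One way to see these namespaces from the host is to look up a running container's main process and list its namespace handles under /proc. This is a Linux-only sketch that assumes root access and a running container; `web1` is a placeholder name:

```shell
# Find the PID of the container's main process
# ("web1" is a hypothetical container name).
pid=$(docker inspect --format '{{.State.Pid}}' web1)

# Each entry here is one namespace (pid, uts, net, mnt, ipc, ...)
# that the containerized process lives in.
sudo ls -l /proc/"$pid"/ns
```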
Another technology Docker uses is cgroups, also called control groups. A key benefit of isolating applications is that each can be constrained to its own set of resources, which makes safe multi-tenancy possible on a single host. Control groups let Docker share the host's available hardware among containers and, where desired, enforce limits and constraints. For example, you can easily limit the memory available to a particular container using cgroups.
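For example, a memory cap can be set when a container is started; the 512 MB figure here is chosen arbitrarily for illustration:

```shell
# Start a container whose memory use is capped at 512 MB;
# Docker translates the -m flag into a cgroup limit.
docker run -m 512m fedora /bin/bash
```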
Union File System
Union file systems, or UnionFS, operate by creating layers, which makes them very lightweight and fast; Docker uses union file systems as the building blocks of its images, and this is what gives Docker its lightweight, high-speed nature. Docker can use several union file system variants, including AUFS, btrfs, vfs and DeviceMapper.
Docker combines these components into a wrapper called a container format. The default container format is libcontainer. Docker also supports traditional Linux containers through LXC, and in the future it may support other container formats, for example by integrating with BSD Jails or Solaris Zones.