Introduction to Docker
In this article, we introduce the concepts of Docker and where it fits in a production-like environment. We start with the basic aspects of the Docker container service and then focus mainly on Docker's architecture.
Docker is an open source platform for developing, shipping, and running applications. It lets you separate your applications from the underlying infrastructure, so software can be delivered faster than ever. Docker also simplifies infrastructure management, because the infrastructure can be managed in much the same way as the applications themselves. By applying Docker to shipping, testing, and deploying code, we can significantly reduce the delay between writing code and running that same code in production.
Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security that Docker provides make it possible to run many containers simultaneously on a single Docker-enabled host. Containers can be thought of as virtual computers, but with a much smaller OS footprint. In other words, containers are lightweight because they do not need a hypervisor and run directly on the host machine's kernel. This also means a given host can run far more containers than it could virtual machines. The irony is that you can run Docker containers inside hosts that are themselves virtual machines!
Docker provides the tooling and a platform to manage the lifecycle of your containers:
1. We can develop applications and their supporting components using containers.
2. The container then becomes the basic unit for distributing and testing the application.
3. When the application is ready, we can deploy it into the production environment, either as a standalone container or as an orchestrated service. This works the same whether the production environment is a local data center, a cloud provider, or a hybrid of the two.
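As a minimal sketch of that lifecycle (the image name, tag, and ports below are hypothetical), the same image moves from local development to an orchestrated production service:

```shell
# Build an image from the application's Dockerfile in the current directory
docker build -t myapp:1.0 .

# Run it locally as a single container during development and testing
docker run -d --name myapp-dev -p 8080:80 myapp:1.0

# Deploy it into production as an orchestrated service (Docker Swarm mode)
docker service create --name myapp --replicas 3 -p 80:80 myapp:1.0
```

These commands require a running Docker daemon (and, for the last one, a Swarm-mode cluster), so they are illustrative rather than directly runnable here.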
The Docker Engine is a client-server application with the following major components:
1. A long-running server called the daemon process (the dockerd command).
2. A REST API that specifies interfaces programs can use to communicate with the daemon and instruct it what to do.
3. A command line interface (CLI) client (the docker command).
The CLI uses the Docker REST API to control or interact with the Docker daemon, either through scripting or via direct CLI commands. Many other Docker applications use the underlying API and CLI as well. The daemon creates and manages Docker objects such as images, containers, networks, and volumes.
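To illustrate that the CLI and the REST API drive the same daemon, both commands below query the daemon's version. The API version prefix (v1.41) is an assumption and varies with your Engine release; the socket path is the default on Linux:

```shell
# Via the CLI client
docker version

# Via the REST API, over the daemon's default Unix socket
curl --unix-socket /var/run/docker.sock http://localhost/v1.41/version
```

Both require a running Docker daemon on the local machine, so treat this as a sketch of the architecture rather than a portable script.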
Docker can be used for far more than we can cover here, so let us limit ourselves to the most common and useful scenarios and take a closer look at each.
1. Fast, consistent delivery of your applications:
Docker streamlines the development lifecycle by allowing developers to work in standardized environments, using local containers that provide your applications and services. Containers are a natural fit for continuous integration and continuous delivery (CI/CD) workflows.
2. Consider the following example scenario:
A. Developers can concentrate on writing code locally and then share their work with colleagues using Docker containers.
B. They can use Docker to push their applications into a test environment and run automated and manual regression tests there.
C. When developers find bugs, they can fix them in the development environment, redeploy to the test environment, and verify that the fix works.
D. Once testing is complete, delivering the fix to the customer is as simple as pushing the updated image from the test environment to production.
3. Responsive deployment and scaling:
Docker's container-based platform allows for highly portable workloads. Docker containers can run on a developer's laptop, on physical or virtual machines in a data center, on cloud providers, on premises, or in a mix of all these environments. Docker's portability and lightweight nature also make it easy to manage workloads dynamically, scaling applications and services up or tearing them down as the business dictates.
4. Running more workloads on the same hardware:
As discussed in the sections above, Docker is lightweight, and it is lightning fast as well. It provides a viable, cost-effective alternative to hypervisor-based virtual machines, letting you use more of your compute capacity while still meeting your business goals. Docker is well suited for high-density environments, and also for small and medium deployments where there is always more to be done with fewer resources.
Like many of its counterparts in this arena, Docker has a client-server architecture. The Docker daemon, the server component, is responsible for all container-related actions. The daemon receives commands either from the Docker client via the command line interface (CLI) or through the Docker REST API. The Docker client can reside on the same host as the Docker daemon, or on a different machine altogether.
Images are the basic building blocks of Docker, and containers are built from these images. An image can be understood as a template containing an application and its required configuration, and a container as a running copy of that image. Images are maintained and organized in layers: each change to an image is added as a new layer on top of the existing ones.
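A minimal sketch of this layering (the file names and base image here are hypothetical): each instruction in a Dockerfile produces a layer, which `docker history` then lists.

```shell
# Write a small Dockerfile; each instruction below becomes an image layer
cat > Dockerfile <<'EOF'
FROM alpine:3.19
RUN apk add --no-cache curl
COPY app.sh /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]
EOF

# Build the image, then list its layers newest-first
docker build -t layered-demo .
docker history layered-demo
```

Because layers are shared between images with a common base, a change to `app.sh` only rebuilds the COPY layer and everything after it, not the base layers.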
With a basic understanding of Docker images and containers, let us now look at what Docker registries are. A Docker registry is a repository of Docker images. Using a registry, we can build and share images with the peers and colleagues on our team. Registries can be either public or private. If a registry is public, like Docker Hub, its images are accessible to all Docker Hub users. A private registry works much like Git: we can build images locally, commit them, and push them to the registry.
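Sharing an image through a registry follows a tag-push-pull pattern; the user namespace and image names below are hypothetical:

```shell
# Log in to Docker Hub (or pass a private registry URL)
docker login

# Tag a local image under your registry namespace, then push it
docker tag myapp:1.0 myuser/myapp:1.0
docker push myuser/myapp:1.0

# A colleague can now pull the identical image on their own machine
docker pull myuser/myapp:1.0
```

These commands need a Docker daemon and registry credentials, so they are a sketch of the workflow rather than a runnable script.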
A Docker container is the actual execution environment in Docker. As explained earlier, containers are created from Docker images. You can configure all the required applications inside a container and commit it to create a golden image, from which we can then build as many containers as we like. Two or more containers may be linked together to form a tiered application architecture that fulfils business needs and requirements. Containers can be started, stopped, committed, or terminated altogether, but a point to remember here is that if a container is terminated without being committed, all changes made inside that container are lost forever.
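The commit workflow described above can be sketched as follows (container and image names are hypothetical). Without the `docker commit` step, removing the modified container would discard the installed package:

```shell
# Start a base container and make a change inside it
docker run --name web-base ubuntu:22.04 \
    bash -c "apt-get update && apt-get install -y nginx"

# Commit the modified container as a reusable "golden image"
docker commit web-base mycompany/web-golden:1.0

# Build as many containers from the golden image as needed
docker run -d --name web1 mycompany/web-golden:1.0 nginx -g "daemon off;"
docker run -d --name web2 mycompany/web-golden:1.0 nginx -g "daemon off;"
```

In practice a Dockerfile is usually preferred over `docker commit` for building golden images, since it keeps the build reproducible; these commands also require a running Docker daemon.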
In this article, we have introduced the concepts of Docker and where this application finds its use. Docker containerizes services into isolated environments without the overhead of a full virtual machine, OS, and networking stack for each one. We have also covered the usage, architecture, and design of Docker. We hope this article was detailed enough for you to understand the concepts well.