Docker is a tool used to containerize applications for automated deployment. Containerization produces lightweight, isolated applications that run efficiently on any platform without separate configuration.
How does Docker work?
Docker is a program that performs OS-level virtualization: isolated user spaces, called containers, run software packages. Each container is created from an image, a bundle of all the resources and configuration files an application needs to run. Multiple isolated containers can share a single operating system kernel.
These containers allow multiple applications to run in a virtual environment, forming a portable system that works independently on any platform: on premises, public cloud, private cloud, bare metal, or any other base.
Docker is widely used in distributed systems, where multiple nodes perform autonomous tasks concurrently, cutting down dependency on physical systems. It is used to scale systems for applications such as Apache Cassandra, MongoDB, and Riak, and it also fits naturally into DevOps, mainly in the Continuous Deployment stage.
What are the Best Docker Deployment Tools?
To use Docker, it must be installed on the host system. Several tools on the market provide the right environment for configuring containerization on your host. Below are some deployment tools worth knowing, so you can pick the right one for your requirements.
Tool #1 Kubernetes
Kubernetes is an open-source platform that supports containerized workloads and services. It is a portable and extensible ecosystem whose services, tools, and support are widely available.
More on Kubernetes
Kubernetes was open-sourced by Google in 2014. It can be considered as:
- a container platform
- a microservices platform
- a cloud platform
- and many more.
Apart from being a container-management platform, it supports networking, computing, and storage for user workloads. It is widely perceived as Platform as a Service (PaaS) with the extensibility of Infrastructure as a Service (IaaS).
[Related Blog: Networking In Docker]
Kubernetes on Docker
Kubernetes is available in Docker Desktop for Mac 17.12 CE Edge and higher, and 18.06 Stable and higher. It runs as a single-node cluster that is not configurable; its server runs locally within the Docker instance on your system.
To test your workloads, deploy them on Kubernetes and run the standalone server. The following commands and instructions cover some basic workflows for Kubernetes on your Mac.
The Kubernetes client command is kubectl, which is used to configure the server locally. If kubectl is already installed, make sure it points to docker-for-desktop. You can verify and set this with the following commands:
$ kubectl config get-contexts
$ kubectl config use-context docker-for-desktop
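Once the context points at docker-for-desktop, a quick way to exercise the cluster is to apply a minimal Deployment manifest. The sketch below writes such a manifest to a file; the `hello-test` names and the `nginx:alpine` image are illustrative placeholders, not from this article.

```shell
# Write a minimal test Deployment to a file. All names and the image
# are illustrative placeholders.
cat > test-deployment.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-test
  template:
    metadata:
      labels:
        app: hello-test
    spec:
      containers:
      - name: web
        image: nginx:alpine
EOF
# On a machine with Kubernetes enabled in Docker Desktop, you would then run:
#   kubectl apply -f test-deployment.yml
#   kubectl get pods
```

If the pod reaches the Running state, the local single-node cluster is working.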
[Related Blog: Management of Complex Docker Containers]
If you installed kubectl with Homebrew or another installer and run into conflicts, remove it from /usr/local/bin/kubectl.
- To enable Kubernetes and install a standalone instance running as a Docker container, click Enable Kubernetes, select the default orchestrator, and click Apply. An internet connection is required to download the images and containers needed to instantiate Kubernetes. The /usr/local/bin/kubectl command is then installed on your Mac.
- Most users don't need the Kubernetes system containers. By default, they are not shown in the output of docker service ls; to display them, select Show system containers (advanced) and click Apply and Restart.
- To remove Kubernetes support, deselect Enable Kubernetes. This removes all the Kubernetes containers and commands, including /usr/local/bin/kubectl.
While working with Kubernetes, you can use swarm mode to deploy some of your workloads. To do so, override the default orchestrator for a given terminal session or a single Docker command using the DOCKER_STACK_ORCHESTRATOR variable.
There are two ways of overriding:
- Override the orchestrator for a single deployment by setting the variable at the start of the command itself:
DOCKER_STACK_ORCHESTRATOR=swarm docker stack deploy --compose-file /path/to/docker-compose.yml mystack
- Alternatively, override the default orchestrator for a single deployment by setting the --orchestrator flag to swarm or kubernetes:
docker stack deploy --orchestrator swarm --compose-file /path/to/docker-compose.yml mystack
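Besides the per-command forms above, the variable can also be exported once for a whole terminal session, so that every subsequent docker stack command targets the same orchestrator. A minimal sketch:

```shell
# Export once; every later "docker stack" command in this shell session
# will then target swarm instead of the default orchestrator.
export DOCKER_STACK_ORCHESTRATOR=swarm
echo "$DOCKER_STACK_ORCHESTRATOR"   # prints: swarm
```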
Overall, Kubernetes is essentially a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts".
Tool #2 Prometheus
Prometheus is installed to monitor applications deployed in containers. It collects data from the host at intervals and evaluates it to generate alerts when necessary. It implements a highly dimensional data model and offers a built-in expression browser, Grafana integration, and a console template language for strong analysis support. Its powerful query language allows time-slicing of data to generate ad-hoc graphs, tables, and alerts.
How to install Prometheus on Docker?
Installing Prometheus on Docker is straightforward and can be done in either of two ways.
- From Precompiled Binaries
Precompiled binaries are available on GitHub under the latest releases. Use the latest binary release to install Prometheus for better stability and newer features.
- Building from Source
To build Prometheus from source, you need a working Go environment, version 1.5 or above. You can use the Go toolchain directly to install the prometheus and promtool binaries into your GOPATH.
Alternatively, clone the repository and build it using the Makefile:
$ mkdir -p $GOPATH/src/github.com/prometheus
$ cd $GOPATH/src/github.com/prometheus
$ git clone https://github.com/prometheus/prometheus.git
$ cd prometheus
$ make build
$ ./prometheus -config.file=your_config.yml
The Makefile provides the following targets:
- build: builds the prometheus and promtool binaries
- test: runs the tests
- format: formats the source code
- vet: checks the source code for errors
- assets: rebuilds the static assets
- docker: builds a Docker image for the current HEAD
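Whichever way you install it, Prometheus reads its scrape targets from a configuration file. The sketch below writes a minimal one; the job name and interval are illustrative, and the commented docker run line shows how the official prom/prometheus image could pick it up on a Docker host.

```shell
# Minimal Prometheus configuration: scrape Prometheus itself every 15s.
# The job name and the interval are illustrative choices.
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
EOF
# On a Docker host you could then run the official image with this file:
#   docker run -p 9090:9090 \
#     -v "$PWD/prometheus.yml:/etc/prometheus/prometheus.yml" \
#     prom/prometheus
```

The expression browser is then reachable on port 9090.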
The idea of treating time-series data as a source for generating alerts is now available to everyone as open source through Prometheus.
Tool #3 Dockersh
Dockersh provides a user shell for isolated, containerized environments. It is used as a login shell on machines with multiple interactive users.
When invoked, Dockersh brings up a Docker container and spawns an interactive shell in the container's namespace.
Dockersh can be used in two ways:
as a shell in /etc/passwd or as an ssh ForceCommand.
Dockersh is mainly used in multi-user environments to provide user isolation on a single box. With Dockersh, each user enters their own Docker container, with their home directory mounted from the host machine, providing data retention between container restarts.
Users can see only their own processes and have their own kernel namespaces for processing and networking. This provides user privacy and a better division of resources and per-user constraints.
Normally, to provide user isolation through individual containers, you would have to run an ssh daemon in each container and give each user a separate ssh port, or resort to ForceCommand hacks. Dockersh eliminates these complex procedures and lets a single ssh process, on the normal ssh port, achieve the same isolation.
Dockersh removes all privileges, including disabling suid, sgid, raw sockets, and mknod capabilities of the target process (and all sub-processes). This, however, does not ensure complete security against public access.
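The two invocation modes mentioned above can be sketched as follows. The username alice and the file names are placeholders; on a real host the snippet would be merged into /etc/ssh/sshd_config rather than written to a separate file.

```shell
# Sketch of the two ways to wire up dockersh; "alice" is a placeholder.
#
# 1) As a login shell: point the user's shell at the dockersh binary,
#    e.g. on a real host: chsh -s /usr/local/bin/dockersh alice
#
# 2) As an sshd ForceCommand: a Match block like the one below would be
#    appended to /etc/ssh/sshd_config.
cat > sshd_config.snippet <<'EOF'
Match User alice
    ForceCommand /usr/local/bin/dockersh
EOF
```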
[Related Blog: Security in the Docker]
Requirements to install Dockersh
- OS: Linux version 3.8 and above.
- Docker version 1.2.0 and above.
- Go version 1.2.0 and above (if you want to build it locally).
Installation on Docker
Build the Dockerfile in the local directory into an image and run it:
$ docker build .
# Progress, takes a while the first time...
....
Successfully built 3006a08eef2e
$ docker run -v /usr/local/bin:/target 3006a08eef2e
This is the simplest and recommended way of installing Dockersh.
Tool #4 Twistlock
For comprehensive security of your Docker Enterprise or Docker Community Edition deployment, Twistlock is an undeniable first choice. It protects against advanced threats and vulnerabilities and includes powerful runtime protection.
Twistlock uses machine learning to enable automated policy creation and enforcement, providing full-lifecycle, full-stack container security.
Twistlock has some notable features for providing seamless container security, among them:
- Vulnerability Management: It collects vulnerability information from the images in your registry so you can analyze and address issues upstream, before they reach running Docker applications.
- Compliance: With over 200 built-in checks aligned with the Docker CIS Benchmark, Twistlock monitors and automatically enforces compliance policies throughout the container application lifecycle. It can be integrated with any CI/CD tool, any registry, and any platform.
- Runtime Defense: It builds automated security models to provide powerful runtime protection against threats.
- Cloud Native Firewalls: Twistlock ships with cloud-native firewalls, CNAF and CNNF, built for cloud-native environments. Twistlock's intelligence technology in these firewalls protects cloud environments from XSS attacks, SQL injection, and other modern threats.
[Related Blog: Orchestration in the Docker]
Tool #5 Kitematic
Kitematic is an open-source project that simplifies Docker installation on Mac and Windows. It automates the Docker installation process and provides an interactive graphical user interface (GUI) for selecting and running Docker containers. Kitematic integrates with Docker Machine to provision a virtual machine on which Docker Engine is installed locally.
On the GUI home screen you can see curated Docker images that can be chosen and run, and you can also find public Docker images from Docker Hub within Kitematic. The interface provides buttons for easily selecting and running containers, and it lets you manage ports and configure volumes. Advanced tasks, such as changing environment variables, streaming logs, and switching between the Docker CLI and the GUI, can also be performed in Kitematic.
[Related Blog: Docker Images and Containers]
There are two ways to install Kitematic:
- Select Kitematic from the Docker Desktop for Mac or Docker Desktop for Windows menu to get started with the installation.
- Download Kitematic directly from its GitHub repository.
Tool #6 Docker Compose
Docker Compose is used to configure and run multi-container Docker applications. Compose works in all stages: production, staging, development, and testing, as well as CI workflows.
Docker Compose works in a three-step process:
- Define your app's environment in a Dockerfile
- Define all the app's services in a YAML file, so they can run isolated, anywhere
- Run docker-compose up, and Compose starts the whole application
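The steps above can be sketched with a minimal Compose file. The service name and the redis image below are illustrative placeholders; the commented command at the end is what you would run on a Docker host.

```shell
# Step 2 sketch: a minimal docker-compose.yml defining one service.
# The service name and image are illustrative.
cat > docker-compose.yml <<'EOF'
version: '3'
services:
  cache:
    image: redis:alpine
    ports:
      - "6379:6379"
EOF
# Step 3: on a Docker host you would then start everything with:
#   docker-compose up -d
```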
[Related Blog: Use of Docker in Various Applications]
Installing Docker Compose
You can install Docker Compose on Windows, Mac, and 64-bit Linux.
Before installing Docker Compose, make sure Docker Engine is available locally or via a remote connection, as Compose relies heavily on Docker Engine to do its work.
To install on Mac
Docker Desktop for Mac and Docker Toolbox already include Compose along with other apps, so you need not install Compose separately.
To install on Windows
Docker Desktop for Windows and Docker Toolbox already include Docker Compose in their packages, so you need not install it explicitly. However, if you are running the Docker daemon and client directly on Microsoft Windows Server 2016, you will have to install Docker Compose separately. To do so, follow the steps below:
Run PowerShell as an administrator. When asked whether you want to allow this app to make changes to your device, click Yes.
- In PowerShell, run the following command to download Docker Compose, substituting $dockerComposeVersion with the specific version of Compose you want to use:
Invoke-WebRequest "https://github.com/docker/compose/releases/download/$dockerComposeVersion/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\docker\docker-compose.exe
- Run the downloaded executable to install Docker Compose.
To install Docker Compose on Linux
You can install the Docker Compose binary on Linux from the releases page: https://github.com/docker/compose/releases
The page provides step-by-step instructions for downloading the binary and installing it on your system.
Alternative Install Options
There are two other ways to install Docker Compose:
- Install using pip
- Install using container
[Related Blog: Docker Commands]
Install Using pip
If you install via pip, we recommend using virtualenv, because many operating systems have Python packages that conflict with Docker Compose's dependencies.
Use the following commands to proceed:
pip install docker-compose
If you are not using virtualenv, use:
sudo pip install docker-compose
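A sketch of the virtualenv route described above; the environment name compose-env is arbitrary, and the pip line is left commented since it needs network access.

```shell
# Create and activate an isolated environment so Compose's Python
# dependencies cannot clash with the system packages.
python3 -m venv compose-env
. compose-env/bin/activate
# Inside the virtualenv, pip installs without sudo:
#   pip install docker-compose
```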
Install as a Container
To install Docker Compose as a container, run the commands below:
$ sudo curl -L --fail https://github.com/docker/compose/releases/download/1.23.2/run.sh -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
Make sure you replace the version number with the release you require.
[Related Blog: Docker Container Software And Architecture]
Tool #7 Flocker
Flocker is an open-source project for managing containerized data volumes for your Docker applications. Flocker helps Ops teams run containerized stateful services, such as databases, in production by providing tools for data migration.
Unlike a Docker data volume, which is restricted to a single server, a Flocker data volume, called a dataset, can be used with any container, irrespective of where that container runs. Flocker manages both the Docker container and its associated data volume.
Flocker can be installed in one of two ways:
- Use the CloudFormation template to install Flocker on Ubuntu.
- Install it manually; the manual tutorial takes you through the systematic process of installing Flocker on Ubuntu step by step.
Tool #8 Powerstrip
Powerstrip is deprecated and no longer supported by ClusterHQ. It was developed as a pluggable HTTP proxy for the Docker API, allowing many Docker extension prototypes to be plugged into the same Docker daemon. Powerstrip works by chaining blocking webhooks onto arbitrary Docker API calls.
Tool #9 Weave Net
Weave Net creates a virtual network that connects Docker containers across multiple hosts and enables their automatic discovery. It lets portable, microservice-based container applications run anywhere, independent of platform.
The network created by Weave Net lets containers on different daemons connect as though they were on the same network, without externally configured links or port mappings.
A Weave Net network can expose services to the outside world without hassle and, likewise, accept connections from other container applications, irrespective of their location.
Features of Weave Net
- Hassle-free configuration: Containers in the network connect to each other over standard port numbers, so managing microservices is straightforward. Containers can also reach each other through a simple DNS query on the container's name, entirely removing complex communication through NAT configuration and port mapping.
- Service Discovery: Service discovery is very fast on nodes connected through Weave Net, as each node is provided with a fast "micro DNS" server.
- Weave Net is Fast: Weave Net offers fast connections by choosing the quickest path between two hosts, providing low latency and high throughput without intervention.
- Multicast Support: Weave Net supports multicast; data sent to one multicast address is automatically transferred across all its branches.
- Secure: Weave Net traverses the firewalls in your system without requiring a TCP add-on. You can encrypt your traffic so that connections to apps on other hosts remain safe even across untrusted networks.
[Related Blog: Basic Terminologies of Docker]
Installing Weave Net
Ensure that you have Linux (kernel 3.8 or later) and Docker (version 1.10.0 or later) installed on your system.
Use the following commands to install Weave Net:
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
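After installation, a typical first session looks like the sketch below. It is written to a helper script here because the commands need a running Docker daemon; the container name a1 and the image are illustrative.

```shell
# Sketch: record a typical first Weave Net session in a helper script.
# Run it on a host with Docker installed; names are illustrative.
cat > weave-demo.sh <<'EOF'
#!/bin/sh
weave launch                          # start the Weave Net router
eval "$(weave env)"                   # point the docker CLI at Weave's proxy
docker run -d --name a1 nginx:alpine  # this container joins the weave network
EOF
chmod +x weave-demo.sh
```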
[Related Blog: Understand the Basics of Docker]
Tool #10 Drone
Currently, this plugin is deprecated on Docker. Docker Drone is a plugin used to build and publish Docker images to a container registry.
You can build the Drone Docker plugin image with the command below:
docker build --rm=true -f docker/Dockerfile -t plugins/docker .
Tool #11 Logspout
Logspout is a log router for Docker containers. It runs inside a Docker container itself, attaches to all containers on a host, and routes their logs wherever needed, depending on the scenario. It is stateless: it only routes logs to where they are needed, rather than managing or maintaining them. For now, it captures only stdout and stderr.
Pull the latest container from the releases with:
$ docker pull gliderlabs/logspout:latest
If you need a specific version, pull it with:
$ curl -s dl.gliderlabs.com/logspout/v2.tgz | docker load
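A typical invocation mounts the Docker socket, so Logspout can attach to every container's stdout and stderr, and routes everything to a syslog endpoint. The sketch below writes the command to a helper script, since it needs a running daemon; the syslog endpoint is a placeholder to be replaced with your own collector.

```shell
# Sketch: the usual logspout invocation. The syslog endpoint is a
# placeholder; replace it with your own log collector.
cat > run-logspout.sh <<'EOF'
#!/bin/sh
docker run --name=logspout \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog://logs.example.com:514
EOF
chmod +x run-logspout.sh
```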
Tool #12 Helios
Helios was developed as open-source software providing an orchestration framework. However, since the advent of Kubernetes, Helios's features are no longer being updated, so no advanced new capabilities are available. The support team still accepts bug fixes, but not new implementations.
Helios is a platform that provides an orchestration framework for deploying and managing containers across servers. It offers both an HTTP API and a CLI for interacting with servers while running containers, and it keeps timestamped cluster logs of deploys, restarts, and new version releases.
[Related Blog: Introduction Of Docker]
Pre-requisites for Helios Installation
Helios can run on any platform. However, your system needs to have the following:
- Docker 1.0 or later
- Zookeeper 3.4.0 or later
Use helios-solo to run the Helios master and agent. Ensure Docker is installed locally before installing helios-solo.
You can check this by running docker info and confirming that you get a response. Then use the following commands to install Helios:
# add the helios apt repository
$ sudo apt-key adv --keyserver hkp://keys.gnupg.net:80 --recv-keys 6F75C6183FF5E93D
$ echo "deb https://dl.bintray.com/spotify/deb trusty main" | sudo tee -a /etc/apt/sources.list.d/helios.list
# install helios-solo on Debian/Ubuntu
$ sudo apt-get update && sudo apt-get install helios-solo
# install helios-solo on OS X
$ brew tap spotify/public && brew install helios-solo
Once the installation is complete, bring up the helios-solo cluster:
# launch a helios cluster in a Docker container
$ helios-up
# check if it worked and the solo agent is registered
$ helios-solo hosts
You can now use the helios-solo as the local cluster of Helios.
With this, we come to the end of this chapter on Docker deployment tools. It gives an overall understanding and the basic installation procedure for a few selected Docker tools; there is much more to each of them, which you can explore based on your own interests.