Docker is a tool used to containerize applications for automated deployment. Containerization packages an application into a lightweight, isolated unit that runs efficiently on any platform without separate configuration.
Docker performs OS-level virtualization, in which isolated user-space environments, called containers, run software packages. Each container is created from an image, a bundle of all the resources and configuration files an application needs to run. Multiple isolated containers can run on a single operating-system kernel.
Containers let multiple applications run in isolated virtual environments, creating portable workloads that run unchanged on any platform: on premises, public cloud, private cloud, bare metal, or any other base.
Docker is widely used in distributed systems where multiple nodes perform autonomous tasks concurrently, reducing dependence on specific physical machines. It can be used to scale systems for applications such as Apache Cassandra, MongoDB, and Riak, and it is also deployed in DevOps, mainly in the Continuous Deployment stage.
To use Docker, it must be installed on the host system. Several tools on the market provide the right environment for configuring containerization on your host. Below are some deployment tools worth knowing so you can pick the right one for your requirements.
Kubernetes is an open-source platform that supports containerized workloads and services. It is a portable, extensible ecosystem whose services, tools, and support are widely available.
More on Kubernetes
Kubernetes was open-sourced by Google in 2014. Beyond being a container-management platform, it supports networking, computing, and storage for user workloads. It is widely regarded as a Platform as a Service (PaaS) with the extensibility of Infrastructure as a Service (IaaS).
[Related Blog: Networking In Docker]
Kubernetes on Docker
Kubernetes is available on Docker Desktop for Mac 17.12 CE Edge and higher, and 18.06 Stable and higher. The bundled Kubernetes is a single-node cluster; it is not configurable, and its server runs locally within the Docker instance on your system.
To test your workloads, deploy them on Kubernetes and run the standalone server. The following commands and instructions will help you get acquainted with some basic operations of Kubernetes on your Mac.
The Kubernetes client command is kubectl, which is used to configure the server locally. If you have already installed kubectl, make sure it is pointing to docker-for-desktop. You can verify and set this with the following commands:
$ kubectl config get-contexts
$ kubectl config use-context docker-for-desktop
[Related Blog: Management of Complex Docker Containers]
If you installed kubectl with Homebrew or another installer and run into conflicts, remove it from /usr/local/bin/kubectl.
While working with Kubernetes, you can use swarm mode to deploy some of your workloads. To enable this mode, override the default orchestrator for a given terminal session or a single Docker command using the DOCKER_STACK_ORCHESTRATOR variable.
There are two ways of overriding:

Set the environment variable for the session or command:
$ DOCKER_STACK_ORCHESTRATOR=swarm docker stack deploy --compose-file /path/to/docker-compose.yml mystack

Or pass the --orchestrator flag to a single command:
$ docker stack deploy --orchestrator swarm --compose-file /path/to/docker-compose.yml mystack
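Either command expects a Compose file describing the stack to deploy. As an illustration, a minimal docker-compose.yml that such a command could deploy might look like this (the service name and image are hypothetical examples, not from the original text):

```yaml
# Minimal illustrative stack file (hypothetical values).
version: "3.7"
services:
  web:
    image: nginx:alpine   # any image available from a registry
    ports:
      - "8080:80"         # host port 8080 -> container port 80
    deploy:
      replicas: 2         # swarm-mode setting honored by `docker stack deploy`
```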
Overall, Kubernetes is essentially a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts".
Prometheus is installed to monitor and test applications deployed in containers. It collects data from the host at intervals and evaluates it to generate alerts when necessary. It implements a multi-dimensional data model and ships with a built-in expression browser, Grafana integration, and a console template language to provide first-class analysis support. Its powerful query language lets you slice time-series data to generate ad-hoc graphs, tables, and alerts.
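Prometheus's core loop, scraping metrics at intervals, evaluating rules, and firing alerts, can be illustrated with a toy Python sketch. This is not Prometheus code; the metric names, target, and threshold below are invented purely for illustration:

```python
def scrape(source):
    """Pull the current metric values from a target (here, a plain function)."""
    return source()

def evaluate(samples, rules):
    """Apply alerting rules to scraped samples; return any fired alerts."""
    alerts = []
    for metric, threshold in rules.items():
        value = samples.get(metric)
        if value is not None and value > threshold:
            alerts.append(f"ALERT {metric}={value} exceeds {threshold}")
    return alerts

def run_once(source, rules):
    """One scrape-and-evaluate cycle, as Prometheus performs on an interval."""
    return evaluate(scrape(source), rules)

# Hypothetical target exposing two metrics.
fake_target = lambda: {"cpu_usage": 0.93, "mem_usage": 0.40}
rules = {"cpu_usage": 0.90}  # fire when CPU usage exceeds 90%

print(run_once(fake_target, rules))
```

In the real system, `scrape` would be an HTTP pull from an exporter endpoint and `rules` would be PromQL expressions; the loop structure is the same.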
How to install Prometheus on Docker?
Installing Prometheus on Docker involves a few simple steps and can be done in either of two ways.
Precompiled binaries of the latest releases are available on GitHub. Use the latest binary release when installing Prometheus for better stability and enhanced features.
To build Prometheus from source, you need a working Go environment, version 1.5 or above. You can use the Go toolchain directly to install the prometheus and promtool binaries into your GOPATH.
You can clone the repository and build using the Makefile:

$ mkdir -p $GOPATH/src/github.com/prometheus
$ cd $GOPATH/src/github.com/prometheus
$ git clone https://github.com/prometheus/prometheus.git
$ cd prometheus
$ make build
$ ./prometheus -config.file=your_config.yml

The Makefile provides the following targets:

build: builds the prometheus and promtool binaries
test: runs the tests
format: formats the source code
vet: checks the source code for errors
assets: rebuilds the static assets
docker: builds a Docker image for the current HEAD
With Prometheus, the idea of treating time-series data as a data source for generating alerts is available to everyone as open source.
Dockersh provides a user shell for isolated, containerized environments. It is used as a login shell on machines with multiple interactive users.
When invoked, Dockersh brings a Docker container into the active state and spawns an interactive shell in the container's namespace.
Dockersh can be used in two ways:
as a shell in /etc/passwd, or as an ssh ForceCommand.
Dockersh is used mainly in multi-user environments to provide user isolation on a single box. With Dockersh, each user enters their own Docker space (container), with a home directory mounted from the host machine, so data is retained between container restarts.
Users can see only their own processes and have their own kernel namespaces for processing and networking. This provides the necessary user privacy and a better division of resources and constraints for each user.
Generally, to provide user isolation through individual containers, you would have to run an SSH daemon in each container and give each user a separate port to SSH into, or resort to ForceCommand hacks. Dockersh eliminates the need for such complex procedures and lets you run a single ssh process on the normal SSH port while still achieving isolation.
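The two invocation styles mentioned above might look like this in practice (the username, group name, and install path below are hypothetical; your dockersh location may differ):

```
# /etc/passwd entry using dockersh as a user's login shell
jane:x:1001:1001::/home/jane:/usr/local/bin/dockersh

# sshd_config snippet forcing dockersh for a group of users
Match Group dockersh-users
    ForceCommand /usr/local/bin/dockersh
```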
Dockersh removes all privileges, including suid, sgid, raw sockets, and mknod capabilities, from the target process (and all sub-processes). This, however, does not ensure complete security against public access.
[Related Blog: Security in the Docker]
Requirements to install Dockersh
Installation on Docker
Build the Dockerfile in the local directory into an image, then run the resulting image:

$ docker build .
# Progress, takes a while the first time...
Successfully built 3006a08eef2e
$ docker run -v /usr/local/bin:/target 3006a08eef2e
This is the simplest and recommended way of installing Dockersh.
For comprehensive Docker security solutions for your Docker Enterprise or Docker Community Edition, Twistlock is the undeniable first choice. It provides protection against advanced threats and vulnerabilities, and includes powerful runtime protection.
Twistlock uses machine learning to enable automated policy creation and enforcement, providing full-lifecycle, full-stack container security.
Twistlock has some incredible features for providing seamless security to containers.
[Related Blog: Orchestration in the Docker]
Kitematic is an open-source project developed to simplify Docker installation on Mac and Windows systems. It automates the Docker installation process and provides an interactive graphical user interface (GUI) for selecting and running Docker containers. Kitematic integrates with Docker Machine to provision a virtual machine on which Docker Engine is installed locally.
On the GUI home screen, you can see curated Docker images, which you can choose to run. You can also find Kitematic's public Docker images on Docker Hub. The user interface provides buttons for easily selecting and running Docker containers, and it lets you manage ports and configure volumes. Advanced features such as changing environment variables, streaming logs, and switching between the Docker CLI and the GUI are also available in Kitematic.
[Related Blog: Docker Images and Containers]
There are two ways to install Kitematic.
Docker Compose is used to configure and run multi-container Docker applications. Compose works in all stages: production, staging, development, and testing, as well as CI workflows.
Docker Compose works in a three-step process:

1. Define your application's environment in a Dockerfile so it can be reproduced anywhere.
2. Define the services that make up your application in a docker-compose.yml file so they can run together in an isolated environment.
3. Run docker-compose up to start and run your entire application.
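The classic Compose workflow, a Dockerfile plus a docker-compose.yml brought up with docker-compose up, can be sketched as follows (the service names and ports are hypothetical examples, not from the original text):

```yaml
# docker-compose.yml -- two hypothetical services
version: "3"
services:
  web:
    build: .             # image built from the Dockerfile in this directory
    ports:
      - "5000:5000"      # expose the app on host port 5000
  redis:
    image: "redis:alpine"  # a supporting service pulled from a registry
```

Running `docker-compose up` from the directory containing this file starts both services together in an isolated environment.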
[Related Blog: Use of Docker in Various Applications]
Installing Docker Compose
You can install Docker Compose on Windows, Mac, and 64-bit Linux.
Before installing Docker Compose, make sure you have Docker Engine available locally or connected remotely, as Compose relies on Docker Engine to do its work.
To install on Mac
Docker Desktop for Mac and Docker Toolbox already include Compose along with other apps, so you do not need to install Compose separately.
To install on Windows
Docker Desktop for Windows and Docker Toolbox already include Docker Compose in their packages, so you need not install it explicitly. However, if you are running the Docker daemon and client directly on Microsoft Windows Server 2016, you will have to install Docker Compose separately. To do so, follow the steps below:
Run PowerShell as an administrator. When asked whether you want to allow this app to make changes to your device, click Yes.
In PowerShell, run this command:
Command: [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Then run the next command to download Docker Compose, substituting $dockerComposeVersion with the specific version of Compose you want to use:
Command: Invoke-WebRequest "https://github.com/docker/compose/releases/download/$dockerComposeVersion/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\Docker\docker-compose.exe
Run the executable to install Docker Compose.
To install Docker Compose on Linux
You can install the Docker Compose binary on Linux from the releases page: https://github.com/docker/compose/releases
That page provides step-by-step instructions for downloading the binary and installing it on your system.
Alternative Install Options
There are two other ways to install Docker Compose:
[Related Blog: Docker Commands]
Install Using pip
If you install via pip, we recommend using virtualenv, because many operating systems have Python packages that conflict with Docker Compose's dependencies.
Use the following commands to proceed:
pip install docker-compose
If you are not using virtualenv, use:
sudo pip install docker-compose
Install as a Container
To install Docker Compose as a container, run the commands below.
$ sudo curl -L --fail https://github.com/docker/compose/releases/download/1.23.2/run.sh -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
Make sure you replace the version number with the one you require.
[Related Blog: Docker Container Software And Architecture]
Flocker is an open-source project developed to manage containerized data volumes for your Docker applications. Flocker helps Ops teams run containerized stateful services, such as databases, in production by providing tools for data migration.
Unlike a Docker data volume, which is restricted to a single server, a Flocker data volume, called a dataset, can be used with any container, irrespective of where that container is running. Flocker can manage both the Docker container and its associated data volume.
Flocker can be installed in either of two ways:

You can use a CloudFormation template to install Flocker on Ubuntu.

Alternatively, install it manually from the link given below; that tutorial takes you through the systematic process of installing Flocker on Ubuntu by hand.
Powerstrip is currently deprecated and no longer supported by ClusterHQ. It was developed as a pluggable HTTP proxy for the Docker API that lets you plug many Docker extension prototypes into the same Docker daemon. Powerstrip works by chaining blocking webhooks onto arbitrary Docker API calls.
Weave Net creates a virtual network that connects Docker containers across multiple hosts and enables automatic discovery of containers. It lets portable, microservice-based container applications run anywhere, independent of the platform.
The network Weave Net creates lets containers attached to different daemons connect as though they were on the same network, without having to externally configure links and port mappings.
The network also exposes your services to the public without any hassle, and likewise accepts connections from other container applications, irrespective of their location.
Features of Weave Net
[Related Blog: Basic Terminologies of Docker]
Installing Weave Net
Ensure that you have Linux (kernel 3.8 or later) and Docker (version 1.10.0 or later) installed on your system.
Use the following commands to install Weave Net:
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
The Drone Docker plugin is used to build and publish Docker images to a container registry. Note that this plugin is currently deprecated.
You can install the Drone Docker plugin by building its image yourself with the command below:

Command: docker build --rm=true -f docker/Dockerfile -t plugins/docker .
Logspout is a log router for Docker containers. It runs inside a Docker container itself, attaches to all containers on a host, and routes their logs to any destination needed, depending on the scenario. It is stateless: it only routes logs to where they are needed, rather than managing and storing them. For now, it captures only stdout and stderr.
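The routing idea, taking each container's stdout/stderr stream and forwarding it to a destination without storing anything, can be sketched in a few lines of Python. This is a toy model, not logspout's actual implementation; the record fields and route setup are invented for illustration:

```python
def route(record, routes):
    """Forward one log record to every destination whose filter matches.

    record: dict with 'container', 'stream' ('stdout'/'stderr'), 'message'.
    routes: list of (filter_fn, destination) pairs. Stateless: nothing is
    stored by the router itself; records are only passed through.
    """
    for matches, destination in routes:
        if matches(record):
            destination.append(record["message"])

# Hypothetical setup: send stderr lines to an "alerts" sink,
# and stdout lines to a general sink.
alerts, general = [], []
routes = [
    (lambda r: r["stream"] == "stderr", alerts),
    (lambda r: r["stream"] == "stdout", general),
]

route({"container": "web", "stream": "stderr", "message": "boom"}, routes)
route({"container": "web", "stream": "stdout", "message": "ok"}, routes)
print(alerts, general)
```

In logspout, the destinations are URIs such as a syslog endpoint rather than in-memory lists, but the pass-through routing model is the same.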
Pull the latest container from the releases using:
$ docker pull gliderlabs/logspout:latest
If you need a specific version, pull it using:
$ curl -s dl.gliderlabs.com/logspout/v2.tgz | docker load
Helios was developed to provide an open-source orchestration framework. However, since the advent of Kubernetes, Helios's features are no longer being updated, so no new capabilities are being added. The maintainers still accept bug fixes, but not new implementations.
Helios is a platform that provides an orchestration framework for deploying and managing containers across servers. It offers both an HTTP API and a CLI to interact with servers while running containers. It keeps a timestamped history of cluster events such as deploys, restarts, and new version releases.
[Related Blog: Introduction Of Docker]
Pre-requisites for Helios Installation
Helios can run on any platform. However, your system needs to have the following:
Use helios-solo to run the Helios master and agent.
Ensure you have Docker installed locally, before proceeding to install helios-solo.
You can check this by running `docker info` and confirming that you get a response. Then use the following commands to install helios-solo:

# add the helios apt repository
$ sudo apt-key adv --keyserver hkp://keys.gnupg.net:80 --recv-keys 6F75C6183FF5E93D
$ echo "deb https://dl.bintray.com/spotify/deb trusty main" | sudo tee -a /etc/apt/sources.list.d/helios.list

# install helios-solo on Debian/Ubuntu
$ sudo apt-get update && sudo apt-get install helios-solo

# install helios-solo on OS X
$ brew tap spotify/public && brew install helios-solo

Once the installation is complete, bring up the helios-solo cluster:

# launch a helios cluster in a Docker container
$ helios-up

# check if it worked and the solo agent is registered
$ helios-solo hosts
You can now use the helios-solo as the local cluster of Helios.
With this, we come to the end of the chapter on Docker deployment tools. It gives an overall understanding and the basic installation procedure for a few selected Docker tools. However, there is much more to each of them, which you can explore further based on your own interests.