Docker is a tool used to containerize applications for automated deployment. Containerization creates lightweight, isolated applications that run efficiently on any platform without separate configuration.
1. Kubernetes
2. Prometheus
3. Dockersh
4. Twistlock
5. Kitematic
6. Docker Compose
7. Flocker
8. Powerstrip
9. Weave Net
10. Drone
11. Logspout
12. Helios
Docker is basically a program that performs OS-level virtualization, in which isolated user-space instances, called containers, run software packages. Each container is derived from an image, a bundle of all the resources and configuration files an application needs to run. Multiple isolated containers can run on a single operating system kernel.
These containers let multiple applications run in isolated environments, creating portable workloads that work independently on any platform, be it on-premises, public cloud, private cloud, bare metal, or any other base.
Docker is used extensively in distributed systems, where multiple nodes perform autonomous tasks concurrently, cutting down the dependency on physical systems. It can be used to scale systems for applications like Apache Cassandra, MongoDB, and Riak. It is also deployed in DevOps, mainly in the Continuous Deployment stage.
[ Related Blog: How Docker Works ]
To use Docker, it must be installed on the host system. There are tools available in the market that provide the right environment for configuring containerization on your host. Below are several deployment tools that are important to be aware of, so you can pick the right one for your requirements.
To gain in-depth knowledge and practical experience, explore Docker Online Training.
Kubernetes is an open-source platform for managing containerized workloads and services. It is a portable and extensible ecosystem whose services, tools, and support are widely available.
Kubernetes was open-sourced by Google in 2014. Apart from being a container-management platform, it provides networking, computing, and storage for user workloads. It is widely regarded as offering the simplicity of Platform as a Service (PaaS) with the extensibility of Infrastructure as a Service (IaaS).
[ Related Article: Networking In Docker ]
Kubernetes is available on Docker Desktop for Mac 17.12 CE Edge and higher, and 18.06 Stable and higher. The Kubernetes server it provides is a standalone, single-node cluster that is not configurable, and it runs locally within the Docker instance on your system.
To test your workloads, deploy them on Kubernetes and run the standalone server. The following commands and instructions will get you acquainted with some basic workflows for Kubernetes on your Mac.
The Kubernetes command-line client is kubectl, which is used to interact with the local server. If you have already installed kubectl, make sure it is pointing to docker-for-desktop. This can be verified and set using the following commands.
Commands:
$ kubectl config get-contexts
$ kubectl config use-context docker-for-desktop
If you installed kubectl with Homebrew or another installer and run into conflicts, remove the old binary at /usr/local/bin/kubectl.
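Once kubectl points at docker-for-desktop, a quick sanity check is to list the cluster nodes. A rough sketch of what this might look like (the exact node name, age, and version will vary with your Docker Desktop release):

Commands:

$ kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
docker-for-desktop   Ready    master   5d    v1.10.11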
While working with Kubernetes, you can use swarm mode to deploy some of your workloads. To enable this mode, override the default orchestrator for a given terminal session or a single Docker command using the DOCKER_STACK_ORCHESTRATOR variable.
There are two ways of overriding. The first is to set the DOCKER_STACK_ORCHESTRATOR environment variable (export it to affect the whole session, or prefix a single command as shown):

Command:
DOCKER_STACK_ORCHESTRATOR=swarm docker stack deploy --compose-file /path/to/docker-compose.yml mystack

The second is to pass the --orchestrator flag to the command itself:

Command:
docker stack deploy --orchestrator swarm --compose-file /path/to/docker-compose.yml mystack
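To confirm which orchestrator served a deployment, you can list the stacks under swarm afterwards; a quick check, assuming the deploy above succeeded:

Command:

DOCKER_STACK_ORCHESTRATOR=swarm docker stack ls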
Prometheus is used to monitor applications deployed in containers. It collects data from the host at intervals and evaluates it to generate alerts when necessary. It implements a multi-dimensional data model and has a built-in expression browser, Grafana integration, and a console template language to provide strong analysis support. Its powerful query language allows time-slicing of data to generate ad-hoc graphs, tables, and alerts.
Installing Prometheus on Docker is a simple process. It can be achieved in either of two ways.
Precompiled binaries of the latest releases are available on GitHub. Use the latest binary release when installing Prometheus for better stability and enhanced features.
To build Prometheus from source, you need a working Go environment, version 1.5 or above. You can use the Go toolchain directly to install the prometheus and promtool binaries into your GOPATH.
[ Related Article: Docker Swarm Architecture and Components ]
$ mkdir -p $GOPATH/src/github.com/prometheus
$ cd $GOPATH/src/github.com/prometheus
$ git clone https://github.com/prometheus/prometheus.git
$ cd prometheus
$ make build
$ ./prometheus -config.file=your_config.yml
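The -config.file flag expects a scrape configuration. As a minimal sketch, assuming you only want Prometheus to scrape its own metrics endpoint on the default port 9090, your_config.yml could be created like this:

Commands:

$ cat > your_config.yml <<'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
EOF
$ ./prometheus -config.file=your_config.yml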
The Makefile provides the following feature targets:
build: builds the prometheus and promtool binaries
test: runs the tests
format: formats the source code
vet: checks the source code for errors
assets: rebuilds the static assets
docker: builds a Docker image for the current HEAD
Through Prometheus, the idea of treating time-series data as a source for generating alerts is now available to everyone as open source.
Dockersh provides a user shell for isolated, containerized environments. It serves as a login shell on machines shared by multiple users.
When invoked, Dockersh brings a Docker container into the active state and spawns an interactive shell in the container's namespace. It can be used as a shell in /etc/passwd or as an ssh ForceCommand.
Dockersh is used mainly in multi-user environments to provide user isolation on a single box. With Dockersh, each user enters their own Docker container, with their home directory mounted from the host machine, providing data retention between container restarts.
Users can see only their own processes, and they get their own kernel namespaces for processing and networking. This provides the necessary user privacy and a better division of resources and constraints for each user.
Generally, providing user isolation through individual containers means running an ssh daemon in each container and giving each user a separate port to ssh to, or resorting to ForceCommand hacks. Dockersh eliminates the need for such complex and spurious procedures, letting a single ssh process on the normal ssh port achieve the same isolation.
Dockersh removes all privileges, including suid, sgid, raw sockets, and mknod capabilities of the target process (and all its sub-processes). This, however, does not ensure complete security against public access.
Build the Dockerfile into an image in the local directory and run it using:
Commands:
$ docker build .
# Progress, takes a while the first time..
....
Successfully built 3006a08eef2e
$ docker run -v /usr/local/bin:/target 3006a08eef2e
This is the simplest and recommended way of installing Dockersh.
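Once the dockersh binary has been copied into /usr/local/bin, it can be wired up as a login shell. A minimal sketch, assuming a hypothetical user alice:

Commands:

# register dockersh as a valid login shell
$ echo /usr/local/bin/dockersh | sudo tee -a /etc/shells
# make it alice's login shell
$ sudo chsh -s /usr/local/bin/dockersh alice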
For comprehensive Docker security solutions for your Docker Enterprise or Docker Community Edition, Twistlock is the undeniable first choice. It provides protection against advanced threats and vulnerabilities, and includes powerful runtime protection.
In general, Twistlock uses machine learning to enable automated policy creation and enforcement, providing full-lifecycle, full-stack container security solutions.
Twistlock has some incredible features for providing seamless security to containers, from the vulnerability protection to the runtime defense described above.
Kitematic is an open-source project developed to simplify Docker installation on Mac and Windows systems. It automates the Docker installation process and provides an interactive graphical user interface (GUI) for selecting and running Docker containers. Kitematic integrates with Docker Machine to provision a virtual machine on which the Docker Engine is installed locally.
On the GUI home screen, you can see curated Docker images, which you can choose and run. You can also find public Docker images from Docker Hub within Kitematic. The user interface provides buttons for easily selecting and running Docker containers, and it also lets you manage ports and configure volumes. Advanced tasks such as changing environment variables, streaming logs, and switching between the Docker CLI and the GUI can also be performed in Kitematic.
[ Related Blog: Learn Docker Images ]
There are two ways to install Kitematic.
Link: https://github.com/docker/kitematic/releases/
[ Related Article: Docker Interview Questions and Answers ]
Docker Compose is used to configure and run multi-container Docker applications. Compose works in all stages: production, staging, development, and testing, as well as CI workflows.
Docker Compose works in a three-step process:
1. Define your application's environment in a Dockerfile so it can be reproduced anywhere.
2. Define the services that make up your app in a docker-compose.yml file so they can run together in an isolated environment.
3. Run docker-compose up to start and run your entire app.
A minimal compose file is sketched below.
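As an illustration of steps 1 and 2, a minimal docker-compose.yml might pair a web service built from the local Dockerfile with the public redis image; the service names and port mapping here are only an example:

Commands:

$ cat > docker-compose.yml <<'EOF'
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
EOF
$ docker-compose up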
You can install Docker Compose on Windows, Mac, and 64-bit Linux.
Before installing Docker Compose, make sure you have Docker Engine installed locally or connected remotely, as Compose relies heavily on Docker Engine to do its work.
As Docker Desktop for Mac and Docker Toolbox already include Compose along with other apps, you need not install Compose separately.
Docker Desktop for Windows and Docker Toolbox already include Docker Compose in their packages, so you need not install it explicitly. However, if you are running the Docker daemon and client directly on Microsoft Windows Server 2016, you will have to install Docker Compose separately. To do so, follow the steps below:
Run PowerShell as an administrator. When asked whether you want to allow this app to make changes to your device, click Yes.
First, make sure TLS 1.2 is enabled so the download from GitHub succeeds:

Command:
[Net.ServicePointManager]::SecurityProtocol=[Net.SecurityProtocolType]::Tls12
Then run the following to download Docker Compose, where $dockerComposeVersion is the release you want:

Command:
Invoke-WebRequest "https://github.com/docker/compose/releases/download/$dockerComposeVersion/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\docker\docker-compose.exe
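Once the download completes, a quick way to confirm the binary works is to query its version (a sanity check; the exact output varies by release):

Command:

docker-compose.exe version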
You can install the Docker Compose binary on Linux from the following link: https://github.com/docker/compose/releases
The releases page provides step-by-step instructions for downloading the binary and installing it on your system.
Alternative Install Options
There are two other ways to install Docker Compose: using pip, or running Compose as a container. Both are described below.
[ Related Blog: Docker Commands ]
If you are installing with pip, we recommend using virtualenv, because many operating systems ship Python packages that conflict with Docker Compose's dependencies. A virtualenv sketch follows the commands below.
Use the following commands to proceed:
Command:
pip install docker-compose
If you are not using virtualenv, use:
Command:
sudo pip install docker-compose
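As a minimal sketch of the recommended virtualenv route (the environment name compose-env is arbitrary):

Commands:

$ pip install virtualenv
$ virtualenv compose-env
$ source compose-env/bin/activate
$ pip install docker-compose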
To install Docker Compose as a container, run the commands below.
Command:
$ sudo curl -L --fail https://github.com/docker/compose/releases/download/1.23.2/run.sh -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
Make sure you replace the version number with the one you require.
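With the wrapper script in place and executable, you can verify the setup as follows; note that the first run pulls the Compose image, so it may take a moment:

Command:

$ docker-compose --version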
[ Related Blog: Docker Container Architecture ]
Flocker is an open-source project developed to manage containerized data volumes for your Docker applications. Flocker helps Ops teams run containerized stateful services, such as databases, in production by providing tools for data migration.
Unlike a Docker data volume, which is restricted to a single server, a Flocker data volume, called a dataset, can be used with any container, irrespective of where that container runs. Flocker can manage both the Docker container and its associated data volume.
Flocker can be installed in one of two ways:
You can use the CloudFormation template to install Flocker on Ubuntu, or install it manually by following the link below:
https://flocker.readthedocs.io/en/latest/docker-integration/manual-install.html
This tutorial walks you through the systematic process of installing Flocker on Ubuntu manually.
Powerstrip is currently deprecated and is no longer supported by ClusterHQ. It was developed as a pluggable HTTP proxy for the Docker API, allowing you to plug many Docker extension prototypes into the same Docker daemon. Powerstrip works by chaining blocking webhooks onto arbitrary Docker API calls.
Weave Net creates a virtual network that connects Docker containers across multiple hosts and enables automatic discovery of the containers. It lets portable, microservice-based container applications run anywhere, independent of the platform.
The network created by Weave Net lets containers running under different daemons connect as though they were on the same network, without having to configure links or port mappings externally.
The Weave Net network exposes services to the public without any hassle, and likewise accepts connections from other container applications, irrespective of their location.
Ensure that you have Linux (kernel 3.8 or later) and Docker (version 1.10.0 or later) installed on your system.
Use the following commands to install Weave Net:
Commands:
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
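With the script installed, a typical first session launches the network and attaches new containers to it. A rough sketch, where the nginx container is merely a placeholder workload:

Commands:

$ weave launch
$ eval $(weave env)
# containers started from this shell now attach to the weave network
$ docker run -d --name web nginx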
Drone is a plugin used to build and publish Docker images to a container registry. Note that this plugin is currently deprecated on Docker.
You can install the Drone plugin using the following command.
Command:
sh .drone.sh
Build the Docker image from the command below:
Command:
docker build --rm=true -f docker/Dockerfile -t plugins/docker .
Logspout is a log router for Docker containers. It runs inside a Docker container itself, attaches to all containers on a host, and routes their logs wherever needed, depending on the scenario. It is stateless: it merely routes logs to where they are needed rather than managing and maintaining them. For now, it only handles stdout and stderr.
Pull the latest container from the releases using:
Command:
$ docker pull gliderlabs/logspout:latest
If you need a specific version, go ahead and pull it using:
Command:
$ curl -s dl.gliderlabs.com/logspout/v2.tgz | docker load
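Once the image is available, Logspout needs access to the Docker socket and a route destination. A minimal sketch, assuming a reachable syslog endpoint at logs.example.com:514 (a placeholder address):

Command:

$ docker run --name=logspout -v /var/run/docker.sock:/var/run/docker.sock gliderlabs/logspout syslog://logs.example.com:514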
Helios was developed to provide an open-source orchestration framework. However, since the advent of Kubernetes, Helios is no longer being updated, so no new features are being added. The maintainers are open to accepting bug fixes, but not new implementations.
Helios is basically a platform providing an orchestration framework for deploying and managing containers across servers. It offers both an HTTP API and a CLI for interacting with servers while running containers, and it keeps timestamped logs of cluster events such as deploys, restarts, and new version releases.
Helios can run on any platform, provided your system meets a few prerequisites. Use helios-solo to run the Helios master and agent.
Ensure you have Docker installed locally before proceeding to install helios-solo.
Command:
docker info
and check if you get a response.
Now, use the following commands to install helios-solo:
# add the helios apt repository
$ sudo apt-key adv --keyserver hkp://keys.gnupg.net:80 --recv-keys 6F75C6183FF5E93D
$ echo "deb https://dl.bintray.com/spotify/deb trusty main" | sudo tee -a /etc/apt/sources.list.d/helios.list
# install helios-solo on Debian/Ubuntu
$ sudo apt-get update && sudo apt-get install helios-solo
# install helios-solo on OS X
$ brew tap spotify/public && brew install helios-solo
Once the installation is complete, bring up the helios-solo cluster using:
# launch a helios cluster in a Docker container
$ helios-up
# check if it worked and the solo agent is registered
$ helios-solo hosts
You can now use helios-solo as a local Helios cluster.
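From here, jobs can be created and deployed against the solo agent. A rough sketch, assuming the standard helios create, deploy, and status subcommands; the job name test:1, the busybox command, and the host name solo are placeholders (use the host printed by helios-solo hosts):

Commands:

# create a trivial job
$ helios-solo create test:1 busybox -- sh -c 'while true; do sleep 1; done'
# deploy it to the solo agent
$ helios-solo deploy test:1 solo
# check job status
$ helios-solo status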
With this, we come to the end of the chapter on Docker deployment tools. This topic gives an overall understanding and the basic installation procedures for a few selected Docker tools. However, there is a lot more to each of them, which you can explore based on your own interests.
List Of MindMajix Docker Courses:
Kubernetes Administration | OpenShift | Docker Kubernetes | OpenShift Administration
Our work-support plans provide options tailored to your project tasks. Whether you are a newbie or an experienced professional seeking assistance in completing project tasks, we are here with the following plans to meet your custom needs:
| Name | Dates | |
|---|---|---|
| Docker Training | Nov 23 to Dec 08 | View Details |
| Docker Training | Nov 26 to Dec 11 | View Details |
| Docker Training | Nov 30 to Dec 15 | View Details |
| Docker Training | Dec 03 to Dec 18 | View Details |
Vinod M is a Big Data expert writer at MindMajix who contributes in-depth articles on various Big Data technologies. He also has experience writing about Docker, Hadoop, Microservices, Commvault, and a few BI tools. You can get in touch with him via LinkedIn and Twitter.