Docker, originally used mainly for creating isolated development environments, is now a well-known and widely used platform for building, shipping, and running distributed applications. Docker is used for Continuous Integration, production deployments, and Platform as a Service (PaaS). Continuous Integration is the practice of automatically building and testing every new piece of code as it is developed; it speeds up software development and improves software quality. Continuous Deployment ensures that code which passes those tests can be released to production automatically. Together, Continuous Integration and Continuous Deployment let developers ship applications quickly and reliably.
Continuous Integration requires the following elements:
Jenkins: an open source continuous integration server. Git can be used as the build SCM tool through Jenkins' Git Source Code Management plugin.
Docker Containers: let developers package an application together with all of its dependencies, so the application runs the same way across different machines.
Dockerfiles: text files containing the instructions Docker uses to build customized images for new containers.
Amazon EC2 Instances: used to host a number of Docker containers simultaneously.
Amazon S3 Buckets: used to store build artifacts and other generated objects.
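As a minimal sketch of the Dockerfile element above, a Dockerfile for a small Python web application might look like the following (the base image, file names, and start command are illustrative assumptions, not taken from a specific project):

```dockerfile
# Hypothetical Dockerfile for a small Python web app
FROM python:3-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code into the image
COPY . .

# Command run when a container starts from this image
CMD ["python", "app.py"]
```

Building an image from it is then a single `docker build` in the project root.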
Some general points about setting up Continuous Integration:
Docker Hub manages the lifecycle of distributed applications. It uses cloud services for building and sharing containers and supports automated workflows; in effect, it is the GitHub of Docker images. Each push to a linked GitHub repository can trigger a new image build.
Docker Hub is a central location for working with Docker and its components, providing services such as image repositories, automated builds, webhooks, and organization accounts.
In order to use Docker Hub, you will first need to register and create an account.
Docker Hub repositories let you share images with the wider Docker community. After building an image, you can easily push it to a Docker Hub repository. Repositories can be starred to indicate that you like them, and you can leave comments and interact with the Docker community.
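Pushing a locally built image to a repository is a short sequence of commands; the `<user>/<repo>` names below are placeholders for your own Docker Hub repository:

```shell
# Placeholder names: replace <user>/<repo> with your Docker Hub repository
docker login                             # authenticate against Docker Hub
docker build -t <user>/<repo>:latest .   # build the image from the local Dockerfile
docker push <user>/<repo>:latest         # upload the image to the repository
```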
CircleCI is a CI platform that fits Docker well. If your project has a Dockerfile, CircleCI builds an image from it, starts a new container, and runs your tests inside it. The process starts by signing in with your GitHub account and adding the project as a new build. Then add a configuration file named circle.yml to the root of the project: it needs to install Docker Compose, build the image, and run it. To start the web process, the following command is used:
docker-compose run -d --no-deps web
We do not need the `up` command here because CircleCI already has Redis running. Before testing, some settings on Docker Hub must be changed.
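Putting the steps above together, a circle.yml along these lines would install Docker Compose, build the image, and run the container and tests. This is a sketch in CircleCI 1.0 syntax; the service names and the test command are assumptions, not from a specific project:

```yaml
# Hypothetical circle.yml (CircleCI 1.0 syntax)
machine:
  services:
    - docker                         # enable the Docker service on the build machine

dependencies:
  override:
    - pip install docker-compose     # install Docker Compose
    - docker-compose build           # build the image from the project Dockerfile

test:
  override:
    - docker-compose run -d --no-deps web          # start the web process
    - docker-compose run web python manage.py test # run the tests (assumed command)
```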
By default, a new image build is triggered on every push to GitHub, but that is not what we want: the build should happen only after CircleCI's tests pass on the main branch, with CircleCI then triggering the Docker Hub build. Make the following changes on Docker Hub:
1. Go to Settings and choose Automated Build.
2. Deselect the Active checkbox and save the changes.
3. Go to Settings again and select the Build Triggers option.
4. Switch the trigger status to on.
5. Copy the example curl command shown.
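The trigger command Docker Hub displays has roughly the following shape; the user, repository, and token in the URL are placeholders that must be copied from your own Build Triggers page:

```shell
# Placeholder URL: copy the real command from Docker Hub's Build Triggers page
curl -H "Content-Type: application/json" \
     --data '{"build": true}' \
     -X POST https://registry.hub.docker.com/u/<user>/<repo>/trigger/<token>/
```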
Next, add an environment variable in CircleCI: go to Project Settings, click Environment Variables, name the new variable DEPLOY, and paste the curl command as its value. Then add the code below at the end of circle.yml.
deployment:
  hub:
    branch: master
    commands:
      - $DEPLOY
This runs the $DEPLOY command after the tests pass on the main branch.
With this in place, every push is built and tested by CircleCI, and a passing build on the main branch triggers a new image build on Docker Hub.
Deploying the Code
After signing up for DigitalOcean, create a new Droplet. Select Applications, then choose the Docker application image. Use the following command to SSH in as the root user:
$ ssh root@<some-ip-address>
Now clone the repo and set up Docker Compose. You are ready to run your application:
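On the Droplet, the commands might look like the following; the repository URL is a placeholder for your own project, and the Compose file is assumed to live at the repository root:

```shell
# Placeholder repository: replace <user>/<repo> with your own project
git clone https://github.com/<user>/<repo>.git
cd <repo>

# Install Docker Compose if the Droplet image does not already include it
pip install docker-compose

# Start the application containers in the background
docker-compose up -d
```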
But what about continuous delivery? Instead of having to SSH into the server and pull the new code manually, this step should be part of our workflow: once a new build is generated on Docker Hub, the code running on DigitalOcean should be updated automatically. Tutum is the key element here: it manages the orchestration and deployment of Docker images and containers. To set up Tutum, first sign up with GitHub and add a Node, then link your account and create a file named tutum.yml.
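A minimal tutum.yml for a single web service might look like this; the image name and port mapping are assumptions for an example web app, not values from a specific project:

```yaml
# Hypothetical tutum.yml with a single web service
web:
  image: <user>/<repo>:latest   # image built automatically on Docker Hub
  ports:
    - "80:5000"                 # map container port 5000 to host port 80
  autorestart: always           # restart the container if it stops
```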
Now add a new Stack, give it a name, and upload the tutum.yml file. Then click 'Create and deploy' to build and run the containers.
Over time, a CI server becomes a tangle of different language setups, system libraries, and application-specific dependencies. Jenkins combined with Docker can be used to avoid this situation.
A Dockerfile at the root of the project contains all the information needed to run the application, and Jenkins builds an image from it. Once the image is built, containers can be started from it repeatedly and reliably. Next, a job needs to be defined to run the tests; for example, a job named 'test_file'. There are two test jobs: one runs the tests on every pull request, and the other runs them periodically. Periodic runs catch breakage even when a project has seen no activity for a long time. Although the two jobs are configured differently, the way they run the tests is the same.
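The build step of such a Jenkins job is essentially a shell script; a sketch might look like the following, where the image name and the test entry point are assumptions:

```shell
# Hypothetical Jenkins "Execute shell" build step
# Build an image tagged with the Jenkins build number
docker build -t myapp:${BUILD_NUMBER} .

# Run the test suite inside a throwaway container
# (--rm removes the container when the tests finish)
docker run --rm myapp:${BUILD_NUMBER} ./run_tests.sh
```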
When Docker is used with Jenkins, old containers and images begin to pile up, consuming valuable disk space and other resources. Shipyard can be set up on the CI server as a better way to manage containers, and a Jenkins sidebar link plugin can be used to add a link to the Shipyard interface.
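Leftover containers and images can also be cleaned up directly from a periodic Jenkins job or cron task; a sketch:

```shell
# Remove containers that have exited, then remove dangling (untagged) images,
# reclaiming disk space on the CI server
docker rm $(docker ps -a -q -f status=exited)
docker rmi $(docker images -q -f dangling=true)
```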