Application Platforms and Continuous Integration


Docker, originally used mainly for creating isolated development environments, is now a well-known and widely used platform for building, running and shipping distributed applications. Docker is now used for Continuous Integration, production deployments and Platform as a Service (PaaS). Continuous Integration is the process of automatically building and testing every new piece of code as it is developed. This speeds up software development and improves the quality of the software. Continuous Deployment is the process that ensures the tested application can be released to production automatically. Together, Continuous Integration and Continuous Deployment let developers ship applications quickly and reliably.

Elements of Continuous Integration

Continuous Integration requires the following elements:

Jenkins: an open source continuous integration server. Git can be used as the source code management (SCM) tool for builds through Jenkins’ Git SCM plugin.

Docker Containers: let developers package an application together with all of its dependencies, and allow that application to run unchanged across different machines.

Docker Files: text files containing the instructions Docker uses to build customized images for new containers (a minimal example is sketched after this list).

Amazon EC2 Instances: These are used for hosting a number of Docker containers simultaneously.

Amazon S3 Buckets: These are used for storing created objects.
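As a point of reference, a Dockerfile is just a text file of build instructions. Below is a minimal illustrative sketch for a small Python web application; the base image, file names and port are assumptions for the example, not taken from any particular project:

# Minimal illustrative Dockerfile for a simple Python web application
FROM python:2.7
WORKDIR /app
# install dependencies first so they are cached between builds
COPY requirements.txt /app/
RUN pip install -r requirements.txt
# copy the rest of the application code
COPY . /app
EXPOSE 8000
CMD ["python", "app.py"]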

Components of Docker Architecture

  • File System: Each container can access only its own sandboxed file system.
  • User Namespace: Each container has its own user database, so the root user inside a container is different from the root account of the host.
  • Process Namespace: Processes inside a container are shielded from those in other containers or on the host machine (see the sketch after this list).
  • Network Namespace: A separate IP address is given to each container.
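The process and network isolation can be seen directly from the command line. A quick sketch, using the small alpine image purely as a convenient example:

$ docker run --rm alpine ps        # lists only the processes inside this container
$ docker run --rm alpine ip addr   # shows only the container's own interfaces and IP address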

Some general rules for setting up Continuous Integration:

  • Long-running features are developed on feature branches.
  • A pull request is initiated when a feature is completed.
  • Merging a pull request brings the develop branch into the master branch in time for deployment.
  • Deployments can be repeated throughout the day.

Docker Hub

Docker Hub manages the lifecycle of distributed applications. It uses cloud services to build and share containers and to enable automated workflows. It acts as the GitHub for Docker images: each time new code is pushed to GitHub, a new image build can be triggered.

The Docker Hub is a central location to work with Docker and its associated components. Docker Hub provides the following services:

  • Hosting of Docker images
  • User authentication
  • Various workflow tools and automated image builds
  • Integration with Bitbucket and GitHub

In order to use Docker Hub, you will first need to register and create an account.

Docker Hub Repositories

Docker Hub repositories allow you to share images with the wider Docker community. After building an image, you can easily push it to a Docker Hub repository. Repositories can be starred to indicate that you like them, and you can leave comments and interact with the Docker community.
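For example, pushing a locally built image to your Docker Hub repository takes only a few commands; the <username>/my-app image name is a placeholder:

$ docker build -t <username>/my-app .
$ docker login
$ docker push <username>/my-app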

Docker and CircleCI for Continuous Integration

CircleCI is a CI platform and it fits Docker well. If you have a Dockerfile, CircleCI builds an image from it, starts a new container and runs the tests inside it. The process starts by signing up with your GitHub account and then adding a new project. Then you need to add a configuration file named circle.yml to the root of the project folder. In it you install Docker Compose, build an image and run it. To run the web process, the following command is used:

docker-compose run -d --no-deps web
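Putting the pieces together, the circle.yml might look like the sketch below. This is only an outline, assuming the CircleCI 1.0 circle.yml format and a web service defined in docker-compose.yml; the test command itself is a placeholder:

machine:
  services:
    - docker

dependencies:
  override:
    - pip install docker-compose
    - docker-compose build

test:
  override:
    - docker-compose run -d --no-deps web
    # replace the line below with whatever command runs your test suite
    - <your test command>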

We don’t need to use the ‘up’ command here because CircleCI already has Redis running. Before testing, some settings on Docker Hub must be changed.

Docker Hub (redux)

By default, a new build is created on Docker Hub with each push to GitHub. But that is not what we want here: CircleCI should run the tests against the master branch first, and only after they pass should the Docker Hub build be triggered. The following updates must be made:

1. Go to Settings and choose Automated Build.
2. Deselect the Active checkbox that appears and save the changes.
3. Go to Settings again and select the Build Triggers option.
4. Turn the trigger status on.
5. Copy the curl command that is generated.

CircleCI (redux)

Add an environment variable to CircleCI. First go to the Project Settings and click Environment Variables. Name the new variable DEPLOY and paste in the curl command copied from Docker Hub. Then include the code below at the end of the circle.yml file.

deployment:
  hub:                 # the section name here is just a label
    branch: master
    commands:
      - $DEPLOY

This runs the command stored in the $DEPLOY variable once the tests pass on the master branch.

Now testing is done in the following way (the corresponding git commands are sketched after this list):

  • A new branch is made
  • Changes, if needed, are made locally
  • Pull request is issued
  • After the tests pass, merging is done manually
  • A new build is triggered on Docker Hub after the second test run (against master) passes
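In terms of git commands, the workflow looks roughly like this; the branch name is illustrative:

$ git checkout -b new-feature
$ git add .
$ git commit -m "Describe the change"
$ git push origin new-feature
# open a pull request on GitHub; once the tests pass, merge it into master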

Deploying the Code

After signing up for DigitalOcean, create a new Droplet. Select Applications, then choose the Docker application. Use the following command to SSH in as the root user:

$ ssh root@<some ip address>

Now clone the repo and set up Docker Compose, and you are ready to run your application.
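A sketch of those steps; the repository URL and directory name are placeholders, and installing Docker Compose with pip is just one option:

$ git clone <your-repo-url>
$ cd <your-repo>
$ pip install docker-compose
$ docker-compose up -d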

But what about continuous delivery? Instead of having to SSH into the server and pull the new code by hand, that step should be part of our workflow, so that once a new build is generated on Docker Hub, the code is updated on DigitalOcean automatically.

Continuous Delivery

We need a mechanism that ensures that after a new build is triggered on Docker Hub, the running code is updated automatically. This lets us avoid having to SSH into the server and pull the new code manually. Tutum is the key element here: it manages the orchestration and deployment of Docker containers and images. To set up Tutum, first sign up with GitHub and add a Node. Then link your account and create a file named tutum.yml.
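A tutum.yml stack file uses a syntax very close to docker-compose. A minimal sketch, where the image name and port mapping are placeholders:

web:
  image: <username>/my-app:latest
  ports:
    - "80:8000"
  links:
    - redis
redis:
  image: redis:latest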

Now add a new Stack, give it a name and upload the tutum.yml file. Then click ‘Create and deploy’ to build and run the containers.

Docker and Jenkins for Continuous Integration

Over time, a CI server tends to become a jumble of different language setups, system libraries and one-off application requirements. Using Jenkins together with Docker helps avoid this situation.

Setting up Jenkins

There is a Dockerfile at the root of the project that contains all the information needed to run the application. Jenkins builds an image from this Dockerfile. Once the image is built, containers can be run from it repeatedly without any interruption. Next, a job needs to be defined in order to run the tests; for example, the job name is ‘test_file’. There are two test jobs: in one of them the tests are run for each pull request, and in the other the tests run periodically. Running the tests periodically ensures that problems are caught even when a project has had no activity for a long time. Though the setup differs between the two jobs, the process of running the tests is the same.
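The build step of such a job is typically just a shell script. A sketch, assuming the image is tagged with the job name and using a placeholder for the project's test command:

# build the image from the project's Dockerfile, then run the tests in a throwaway container
docker build -t test_file .
docker run --rm test_file <your test command>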


When Docker is used with Jenkins, old containers and images begin to stack up, occupying valuable disk space and other resources. Shipyard can therefore be set up on the CI server as a better option for managing containers, and a Jenkins sidebar link plugin can be used to add a link to the Shipyard interface.
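Old containers and dangling images can also be cleaned up directly from the command line on the CI server; a quick sketch:

$ docker rm $(docker ps -aq -f status=exited)       # remove all exited containers
$ docker rmi $(docker images -q -f dangling=true)   # remove dangling (untagged) images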
