GitHub+Jenkins+Docker integration (Advanced)
This project is based on continuous development, continuous integration and continuous deployment. Using DevOps tools, a fully automated setup is built in which the developer only needs to commit the source code, and the changes are then deployed automatically in a Docker container chosen according to the pushed file's extension.
A step-by-step walkthrough of the project is given below:
- First, let us create a Dockerfile for building the image our container will run from: it is based on centos (from Docker Hub) and adds the other resources we need.
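A minimal sketch of such a Dockerfile, assuming Jenkins is installed from its yum repository on top of centos (the package names, key URL, and war path are assumptions — verify them against the current Jenkins install docs):

```dockerfile
# Base image pulled from Docker Hub
FROM centos:7

# Jenkins needs Java; git is needed for the SCM jobs
RUN yum install -y java-11-openjdk wget git sudo

# Add the Jenkins yum repository and install Jenkins
RUN wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo && \
    rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key && \
    yum install -y jenkins

# Jenkins listens on 8080 by default
EXPOSE 8080
CMD ["java", "-jar", "/usr/lib/jenkins/jenkins.war"]
```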
- Now we have to build an image from this Dockerfile.
- Now we have to run a container from the Docker image we have created. Here we use --privileged so that we can switch to the base OS and run other Docker containers as required. We also use port mapping and volume mounting.
- Next, run a cat command: when the container starts, Jenkins is installed automatically, and before Jenkins can perform its jobs it asks for an initial admin password, which is stored in a file inside the container.
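The build/run steps above might look like this (the image name, ports, mount paths, and password file location are assumptions; the password path matches an rpm-based Jenkins install):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t myjenkins:v1 .

# Run it privileged so we can escape to the host OS and manage other containers;
# -p maps Jenkins' port, -v bind-mounts the host root for the chroot trick
docker run -dit --privileged -p 8080:8080 -v /:/host --name jenkins-server myjenkins:v1

# Jenkins asks for an initial admin password on first login:
docker exec jenkins-server cat /var/lib/jenkins/secrets/initialAdminPassword
```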
- Now after setting up jenkins, install plugins such as build pipeline, git and github. After that we are all set to write our jobs.
1. JOB1: This job copies all the files from the GitHub repository to a specific folder in the container. Here I have used Poll SCM, which polls the repository every minute (Jenkins' shortest polling interval). You can instead use GitHub webhook triggers, but for that Jenkins must be reachable from GitHub, e.g. via tunneling.
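JOB1's "Execute shell" build step can be sketched as below. The deployment folder /root/devops is an assumption; Jenkins sets $WORKSPACE itself when it checks out the repository.

```shell
# Copy everything Jenkins checked out from GitHub into the folder
# that the web-server container will later serve from.
deploy_copy() {
    src="$1"
    dst="$2"
    mkdir -p "$dst"
    cp -rf "$src"/. "$dst"/
}

# In the Jenkins job this would be called as:
# deploy_copy "$WORKSPACE" /root/devops
```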
2. JOB2: This job runs as soon as JOB1 succeeds. It deploys an httpd container when the extension of the pushed file is .html. Similarly, we can add more checks and deploy a different container accordingly.
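The extension check in JOB2 can be sketched as a small shell helper; image names other than httpd, and the port/mount choices in the commented launch line, are assumptions:

```shell
# Map a pushed file to the container image that should serve it.
choose_image() {
    case "$1" in
        *.html) echo "httpd" ;;        # plain HTML -> Apache httpd image
        *.php)  echo "php:apache" ;;   # an example of a further check
        *)      echo "none" ;;         # no deployable file type matched
    esac
}

# In the job, something like:
# img=$(choose_image "$(ls /root/devops | head -n 1)")
# [ "$img" != "none" ] && docker run -dit -p 8081:80 \
#     -v /root/devops:/usr/local/apache2/htdocs --name webserver "$img"
```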
3. JOB3: This job runs as soon as JOB2 succeeds. It checks the status code of the httpd server. If the check fails, another job is triggered immediately which sends a mail to the developer.
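JOB3's health check can be sketched like this; the URL in the comment is an assumption, and the non-zero exit status is what makes Jenkins mark the build failed and trigger the mail job:

```shell
# Decide pass/fail from an HTTP status code, e.g. obtained with:
#   code=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8081/)
check_status() {
    if [ "$1" -eq 200 ]; then
        echo "server healthy"
        return 0
    else
        echo "server failing"
        return 1   # non-zero exit fails the Jenkins build
    fi
}
```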
4. JOB4: This job runs immediately after JOB3 fails; it executes a Python script that mails the developer a failure message.
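The notification step can be sketched in shell as below; the recipient, mailer, and message wording are assumptions (the original project uses a Python script for this part):

```shell
# Compose a minimal "Subject + body" message for the failure notification.
build_mail_body() {
    subject="$1"
    message="$2"
    printf 'Subject: %s\n\n%s\n' "$subject" "$message"
}

# In the job, something like:
# build_mail_body "Deployment failed" "httpd returned a non-200 status" \
#     | sendmail developer@example.com
```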
5. JOB5: This job monitors the container acting as the production server. If that container fails, this job launches a new one within seconds.
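JOB5's monitoring loop reduces to a relaunch decision; the container and image names below are assumptions:

```shell
# A restart is needed when no matching production container is running.
need_restart() {
    # $1 = number of running production containers, e.g. from:
    #   docker ps -q -f name=webserver | wc -l
    [ "$1" -eq 0 ]
}

# In the job:
# if need_restart "$(docker ps -q -f name=webserver | wc -l)"; then
#     docker run -dit -p 8081:80 -v /root/devops:/usr/local/apache2/htdocs \
#         --name webserver httpd
# fi
```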
After all the jobs have been set up, it is just the developer's work to push the code to the repository. The pipeline looks like this.
Regarding the Docker commands: they actually run on the host OS, using a small trick. I used chroot to switch the root to that of the host OS, ran all the required commands there, and then switched back to the container.
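Concretely, assuming the container was started privileged with the host's root filesystem bind-mounted at /host (the mount point is an assumption), the trick looks like:

```shell
# Interactive version: root becomes the host OS, then `exit` returns
#   chroot /host
#   docker run -dit httpd
#   exit
# Non-interactive form, usable inside a Jenkins "Execute shell" step:
chroot /host /bin/bash -c "docker run -dit -p 8081:80 --name webserver httpd"
```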
This is a real use-case from industry, and one that is an actual pain point for some companies as well.
Future scope:
- Deploying a suitable environment based on the code pushed by the developer, without having to specify it manually. Here we would pull the suitable image from Docker Hub based on the pushed file's extension.
- Better monitoring of the production environment to verify successful runs, for example by integrating with Kubernetes.