CI/CD Pipelining Using Docker

Retya Mahendra
Nov 29, 2020 · 4 min read


Continuous integration (CI) and continuous delivery (CD) embody a culture, set of operating principles, and collection of practices that enable application development teams to deliver code changes more frequently and reliably. The implementation is also known as the CI/CD pipeline.

CI/CD is one of the best practices for DevOps teams to implement. It is also an agile best practice: because build and deployment steps are automated, software development teams can stay focused on business requirements, code quality, and security.

Why Docker?

Docker is a DevOps platform used to create, deploy, and run applications with containerization. With Docker, developers can package an application together with all of its dependencies and libraries and ship it as a single unit. This helps development and operations teams avoid the environment issues that used to plague deployments: developers can focus on features and deliverables instead of worrying about infrastructure compatibility and configuration. It also fits naturally with a microservices architecture, helping teams build highly scalable applications.

Docker can also serve as a common interface between developers and operations personnel, in line with DevOps principles, eliminating a source of friction between the two teams. The same image or binaries can be saved and reused at every step of the pipeline, so the artifact that was tested is exactly the artifact that gets deployed. Being able to deploy a thoroughly tested container with no environment differences is the most significant advantage, since it ensures no new errors are introduced between build and deployment.

Understanding the Workflow

First, let's create a simple project on our local machine and initialize Git. Once that is done, push the code to a remote repository. My sample project is located at https://gitlab.com/retya.mahendra/simple-web.git

The file app.py is the entry point of this project:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/', methods=['GET'])
def get_tasks():
    # Return the caller's IP address; prefer the X-Forwarded-For header set by a reverse proxy
    if request.environ.get('HTTP_X_FORWARDED_FOR') is None:
        return jsonify({'ip': request.environ['REMOTE_ADDR']}), 200
    else:
        return jsonify({'ip': request.environ['HTTP_X_FORWARDED_FOR']}), 200

if __name__ == '__main__':
    app.run(debug=True)
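
The Dockerfile below installs dependencies from a requirements.txt file, which is not shown in the post. Since the app only needs Flask, a minimal version could be as simple as the following sketch (pin a version in practice):

# requirements.txt (minimal sketch; the repository's actual file may differ)
flask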

This application is packaged into a Docker image built by reading the instructions from a Dockerfile. The Dockerfile contains all the commands needed to build the image:

FROM alpine
MAINTAINER Retya Mahendra <retya.mahendra@gmail.com>
COPY requirements.txt /tmp/requirements.txt
RUN apk add --no-cache python3 \
    && apk add --no-cache py3-pip \
    && pip install --upgrade pip \
    && pip install -r /tmp/requirements.txt
ENV APP_DIR /app
ENV FLASK_APP app.py
RUN mkdir ${APP_DIR}
COPY app ${APP_DIR}
VOLUME ${APP_DIR}
EXPOSE 5000
# Cleanup
RUN rm -rf /.wh /root/.cache /var/cache /tmp/requirements.txt
WORKDIR ${APP_DIR}
CMD ["/usr/bin/flask", "run", "--reload", "--host", "0.0.0.0"]
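
Before wiring this into CI, it is worth sanity-checking the image locally. A quick check could look like this (the simple-web tag is an arbitrary local name):

# Build the image from the Dockerfile in the current directory
docker build -t simple-web .
# Run it in the background and publish the exposed Flask port
docker run --rm -d -p 5000:5000 --name simple-web simple-web
# The app should answer with the caller's IP address as JSON
curl http://localhost:5000/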

Next, we will write a simple .gitlab-ci.yml file so that GitLab CI can build the image and deploy our code to the server.

stages:
  - build
  - deploy

docker-build:
  # Official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

deploy:
  stage: deploy
  when: manual
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
  script:
    # Set the right permissions on the SSH key file
    - chmod 400 $MASTER_SSH_KEY
    # Log in to the GitLab Container Registry on the target host
    - ssh -o StrictHostKeyChecking=no -i $MASTER_SSH_KEY "${MASTER_SSH_USER}@${MASTER_HOST}" "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
    # Remove old containers and images if they exist
    - ssh -o StrictHostKeyChecking=no -i $MASTER_SSH_KEY "${MASTER_SSH_USER}@${MASTER_HOST}" "docker rm -f ${CI_PROJECT_NAME} || true"
    - ssh -o StrictHostKeyChecking=no -i $MASTER_SSH_KEY "${MASTER_SSH_USER}@${MASTER_HOST}" "docker rmi \$(docker images -q ${DOCKER_REPO}) || true"
    # Pull and run the new image
    - ssh -o StrictHostKeyChecking=no -i $MASTER_SSH_KEY "${MASTER_SSH_USER}@${MASTER_HOST}"
      docker run
      --name=$CI_PROJECT_NAME
      --restart=always
      -p 5000:5000
      -d "$CI_REGISTRY_IMAGE"
    # Copy the nginx conf to the server
    - scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i $MASTER_SSH_KEY -r nginx-conf/app-proxy.conf "${MASTER_SSH_USER}@${MASTER_HOST}":/etc/nginx/sites-available/
    - ssh -o StrictHostKeyChecking=no -i $MASTER_SSH_KEY "${MASTER_SSH_USER}@${MASTER_HOST}" "unlink /etc/nginx/sites-enabled/default" || NO_CONTAINER=1
    - ssh -o StrictHostKeyChecking=no -i $MASTER_SSH_KEY "${MASTER_SSH_USER}@${MASTER_HOST}" "ln -s /etc/nginx/sites-available/app-proxy.conf /etc/nginx/sites-enabled/app-proxy.conf" || NO_CONTAINER=1
    # Test the nginx configuration
    - ssh -o StrictHostKeyChecking=no -i $MASTER_SSH_KEY "${MASTER_SSH_USER}@${MASTER_HOST}" "nginx -t"
    # Reload nginx
    - ssh -o StrictHostKeyChecking=no -i $MASTER_SSH_KEY "${MASTER_SSH_USER}@${MASTER_HOST}" "service nginx reload"
  only:
    - master
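
The deploy job copies nginx-conf/app-proxy.conf onto the server, but that file is not shown in the post. A minimal reverse-proxy configuration for this setup could look roughly like the following sketch (the server_name is a placeholder):

# nginx-conf/app-proxy.conf (hypothetical sketch, not the author's actual file)
server {
    listen 80;
    server_name example.com;  # placeholder, use your own domain or server IP

    location / {
        # Forward requests to the Flask container published on port 5000
        proxy_pass http://127.0.0.1:5000;
        # Pass the original client IP so the app can read HTTP_X_FORWARDED_FOR
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}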

This project has two jobs:

  • docker-build → builds a Docker image of the app and pushes it to the GitLab Container Registry
  • deploy → pulls that image on the server and runs it behind nginx

The build job is triggered by every new commit pushed to the master branch of the GitLab repository; the deploy job is then started manually (when: manual) from the pipeline view. The $MASTER_HOST, $MASTER_SSH_USER, $MASTER_SSH_KEY, and $DOCKER_REPO variables are custom and need to be set in the project's CI/CD settings, while the $CI_* variables are provided by GitLab.

After the jobs complete successfully, we can access our server from a browser and see the changes.
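
A quick check from the command line (the hostname is a placeholder for your server's address, e.g. the value of $MASTER_HOST):

curl http://your-server.example.com/
# Expected response is the caller's IP as JSON, e.g. {"ip": "203.0.113.42"}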

Conclusion

Once an application is containerized with Docker, developers can deploy the same container to any environment. Because the container itself never changes, the application runs identically everywhere, without dependency surprises between development, testing, and production.
