How to create a docker image and deploy it using ECR & ECS?

Containerization is a lightweight alternative to full machine virtualization that involves encapsulating an application in a container with its own operating environment. This provides many of the benefits of loading an application onto a virtual machine, as the application can be run on any suitable physical machine without any worries about dependencies.

Containerization has gained prominence with the open-source Docker project. Docker containers are designed to run on everything from physical computers to virtual machines. In this article, we will walk through an end-to-end tutorial: writing a custom Dockerfile, building an image from it, running it with Docker commands, and deploying it with Amazon Elastic Container Service (ECS).

Step 1 — Create an Amazon EC2 Instance (Optional)

It is NOT mandatory to create an EC2 instance; you can use any local/development machine running Ubuntu (I am using 18.04 LTS) to follow these steps.

Step 2 — Install Docker

Assuming you have an instance running Ubuntu 18.04, the following is the list of commands to install Docker.

First, update your existing list of packages

sudo apt update

Next, install a few prerequisite packages which let apt use packages over HTTPS

sudo apt install apt-transport-https ca-certificates curl software-properties-common

Then add the GPG key for the official Docker repository to your system

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

Next, update the package database with the Docker packages from the newly added repo

sudo apt update

Make sure you are about to install from the Docker repo instead of the default Ubuntu repo

apt-cache policy docker-ce

Finally, install Docker

sudo apt install docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. You can check that with the following command.

sudo systemctl status docker

Step 3 — Get the Application Code

In this tutorial, I am going to run a Node.js-based backend system on ECS. The source code of the backend lives in a private repository, so I have cloned it into /usr/backend/ and will use that as my working directory. You can choose your working directory accordingly.

Step 4 — Creating Dockerfile

To build a Docker image, you need to create a Dockerfile. Using that file, you can build a Docker image which can run on any platform without installing any libraries on the actual machine. Docker allows you to package an application with its environment and all of its dependencies into an encapsulated “box”, called a container.

So, the first thing is to create a Dockerfile in your working directory. To do that, you can use the following command

touch Dockerfile

It will create a file named Dockerfile, without any extension, in your working directory. Please ensure that you name it exactly as mentioned above. So far the Dockerfile is blank; you need to add the instructions that Docker will use to build an image.

To edit Dockerfile, you can use any text editor. The following command will open the file in the default editor

nano Dockerfile

FROM

The FROM instruction initializes a new build stage and sets the base image for subsequent instructions. As such, a valid Dockerfile must start with a FROM instruction. The image can be any valid image. In our example, the base image is Ubuntu. The following instruction tells Docker to build on top of the official Ubuntu image

# Getting base image from DockerHub
FROM ubuntu

MAINTAINER

The MAINTAINER instruction sets the Author field of the generated image. It is optional (and deprecated in newer Docker releases in favor of a LABEL instruction), but it still works. Here is an example

# Adding maintainer details - Optional 
MAINTAINER Vivek Navadia <viveknavadia@gmail.com>

RUN

The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.

As we are building this image using only Ubuntu as the base image, the rest of the setup has to be done manually: we need to run a few commands to install Node.js. We can do that with the RUN instruction. The following are a few examples

# Updating packages
RUN apt-get update
RUN apt-get install curl -y
# Downloading Node 10.x
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
# Installing Node.JS
RUN apt install nodejs -y

WORKDIR

The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn't exist, it will be created even if it's not used in any subsequent Dockerfile instruction. In our example, I am setting the following directory as the working directory in the image we are going to build

# Defining Working Directory
WORKDIR /usr/app

COPY

The COPY instruction copies new files or directories from a source path and adds them to the filesystem of the container at a destination path. For example

# Copying source code to Image in /usr/app directory
COPY . /usr/app/

EXPOSE

The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. You can specify whether the port listens on TCP or UDP, and the default is TCP if the protocol is not specified. For Example

# Exposing TCP Protocol 
EXPOSE 3000

CMD

The main purpose of a CMD is to provide defaults for an executing container. When used in the shell or exec formats, the CMD instruction sets the command to be executed when running the image. For example

#Running NPM Start command to run node application
CMD [ "npm", "start" ]
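
Note that CMD [ "npm", "start" ] only works if the application's package.json defines a start script. A minimal hypothetical package.json for such a backend might look like this (the name and entry file are placeholders, not taken from the article's repository):

```json
{
  "name": "aws-ecs-demo-backend",
  "version": "1.0.0",
  "scripts": {
    "start": "node index.js"
  }
}
```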

We are now done with creating the Dockerfile. Here is the complete set of instructions for our use case

# Getting base image from DockerHub
FROM ubuntu
# Adding maintainer details - Optional
MAINTAINER Vivek Navadia <viveknavadia@gmail.com>
# Updating packages
RUN apt-get update
# Installing CURL
RUN apt-get install curl -y
# Downloading Node 10.x
RUN curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh
# Running Shell Script
RUN bash nodesource_setup.sh
# Installing Node.JS
RUN apt install nodejs -y
# Defining Working Directory
WORKDIR /usr/app
# Copying source code to Image in /usr/app directory
COPY . /usr/app/
# Installing node modules required to run code
RUN npm install
# Exposing TCP Protocol
EXPOSE 3000
#Running NPM Start command to run node application
CMD [ "npm", "start" ]
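
Since COPY . /usr/app/ copies the entire working directory into the image, it is good practice to add a .dockerignore file next to the Dockerfile so that local artifacts are excluded. A minimal sketch, assuming a typical Node.js project layout:

```
node_modules
npm-debug.log
.git
Dockerfile
```

Excluding node_modules matters here because the Dockerfile runs npm install inside the image anyway; modules built on the host may not match the image's platform.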

Step 5 — Building an Image

We have a set of instructions in the Dockerfile. We need to execute them so an image can be built containing Ubuntu, Node.js and our source code. To do that, run the following command

docker build [options] -t [imagename]:[tag] [path]

For example

docker build -t myimage:latest .

It will execute instructions step by step and build an image. Following is the sample output of a successful image creation

Step 1/12 : FROM ubuntu
---> ccc6e87d482b
Step 2/12 : MAINTAINER Vivek Navadia <vivek.navadia@digi-corp.com>
---> Using cache
---> f499534bcce7
Step 3/12 : RUN apt-get update
---> Using cache
---> fc2c09bf8799
Step 4/12 : RUN apt-get install curl -y
---> Using cache
---> c07db7d020fa
Step 5/12 : RUN curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh
---> Using cache
---> 4c0a5f65a3c7
Step 6/12 : RUN bash nodesource_setup.sh
---> Using cache
---> d865ab48548d
Step 7/12 : RUN apt install nodejs -y
---> Using cache
---> 379563564a94
Step 8/12 : WORKDIR /usr/app
---> Using cache
---> d6dcae236513
Step 9/12 : COPY . /usr/app/
---> 4ea08de7e99d
Step 10/12 : RUN npm install
---> Running in 009d8db0ae15
Step 11/12 : EXPOSE 3000
---> Running in 29ef990df348
Step 12/12 : CMD [ "npm", "start" ]
---> Running in d0c971294c12
Successfully built 3078e4ca72c6
Successfully tagged myimage:latest

You can list created images with the following command

docker images

Step 6 — Run/Test your Image

So far, we have built an image which we will eventually run on Amazon Elastic Container Service, but before that, you can test it from the command line as well

docker run [options] --name [container name] -p [host port]:[container port] [imagename]:[tag]

For our example, here is the command to run

docker run -d --name test -p 3000:3000 myimage:latest

It will run a container with our application, accessible on port 3000. I am trying this on an EC2 instance with a public IP, so it is accessible at that IP address and port. If you are trying on a local machine, you can check with http://localhost:3000/

So far, we have installed Docker, created a Dockerfile, and built an image from it. We also tested the image, and the application is running in a container. One can build such custom images based on need and launch them. Running such an image using an AWS service is the next, more advanced level of containerization: a serverless architecture that removes the need to provision and manage servers, with security handled through various AWS services.

Step 7 — Creating a Repository in Elastic Container Registry (ECR)

To run the image on Amazon Elastic Container Service, the first step is to make it accessible to ECS. To do that, we create a repository in Amazon Elastic Container Registry (ECR)

  • Open https://aws.amazon.com/ and log in to the AWS Management Console
  • Search for ECR services
  • Click on Create Repository
  • Enter a repository name of your choice (e.g. aws-ecs-demo). Keep the rest of the settings as they are and click the Create Repository button
  • The repository will be created and you can see in the list of repositories
  • Copy URI and keep it for future use
076482949052.dkr.ecr.ap-south-1.amazonaws.com/aws-ecs-demo
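
The repository URI always follows the pattern [account_id].dkr.ecr.[region].amazonaws.com/[repository name]. A small shell sketch composing it from its parts, using the example values from this article:

```shell
# Example values from this article -- substitute your own.
ACCOUNT_ID=076482949052
REGION=ap-south-1
REPO_NAME=aws-ecs-demo

# Compose the repository URI from its parts.
ECR_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO_NAME}"
echo "$ECR_URI"
```

Keeping these pieces in variables also makes the later tag, login and push commands easier to reuse across repositories.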

Step 8 — Install AWS CLI in your local/EC2 Instance

Now we need to push the Docker image we built to the ECR repository, and to do that we execute commands using the AWS Command Line Interface (CLI). So first make sure the AWS CLI is installed; if not, install it using the following commands

Download AWS CLI version 2 from the URL below

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

Make sure you have unzip installed on your machine; if not, you can install it using the following command

sudo apt install unzip

Unzip the downloaded AWS CLI version 2 archive

unzip awscliv2.zip

Install the AWS CLI on your machine

sudo ./aws/install

You can confirm the installation of the AWS CLI using the following command

aws --version

Step 9 — Create an IAM User for ECR & Configure AWS CLI

Accessing AWS services from the CLI requires an IAM user with the appropriate permissions. So we need to create a user with a policy that allows ECR usage and generate an Access Key ID and Secret Access Key. For ECR there is a managed policy called AmazonEC2ContainerRegistryFullAccess. We need to create a user with this policy, as follows

  • Go to IAM and click on Add User
  • Add a user name, select Programmatic access, and click Next
  • Select Attach existing policies directly and select AmazonEC2ContainerRegistryFullAccess
  • Clicking Next will allow you to add tags (optional)
  • Click Create User to create the user with the AmazonEC2ContainerRegistryFullAccess policy
  • After user creation, you will get the Access Key ID and Secret Access Key. Download/copy them for future use.
  • Go back to the terminal, type aws configure, and enter the following information
AWS Access Key ID [None]: [Paste Access Key]
AWS Secret Access Key [None]: [Paste Secret Access Key]
Default region name [None]: [Enter region, e.g. ap-south-1]
Default output format [None]: json
  • This will configure the AWS CLI with the credentials of the user we just created, so now we can invoke commands against the AWS ECR service

Step 10 — Push Docker Image to ECR Repository

Now it is time to push the Docker image that we built in Step 5 to the ECR repository that we created in Step 7.

  • Tag the existing image with the repository URI that we copied earlier in Step 7
docker tag [imagename]:[tag] [repository URI]
  • Example
docker tag myimage:latest 076482949052.dkr.ecr.ap-south-1.amazonaws.com/aws-ecs-demo
  • Log in to the ECR registry using the command line
aws ecr get-login-password | docker login --username AWS --password-stdin [account_id].dkr.ecr.[region].amazonaws.com
  • Example
aws ecr get-login-password | docker login --username AWS --password-stdin 076482949052.dkr.ecr.ap-south-1.amazonaws.com

Output

Login Succeeded
  • Now we are logged in and have access to the AWS ECR service from the command line. We just need to push the local image to the ECR repository
docker push [Repository URI]
  • Example
docker push 076482949052.dkr.ecr.ap-south-1.amazonaws.com/aws-ecs-demo
  • Output
84ab5ef5037d: Pushed
be27e5f6890f: Pushed
bab133b4f157: Pushed
8e7496536e75: Pushed
793bbe9d0b13: Pushed
7918da823616: Pushed
6c59d8fec4e3: Pushed
  • You can check that the image you pushed is available in ECR Repository from AWS Console
  • Copy Image URI for future use

Step 11 — Deploy Image using AWS ECS

We now have the Docker image available in the AWS ECR repository, ready to be deployed with ECS.

  • Create a Cluster and select a template from the list. We are going to use AWS Fargate to leverage AWS-managed infrastructure, so we don't need to create and configure an EC2 instance to deploy this image
  • Enter the cluster name, keep the rest of the options as they are, and create it.
  • Go to Task Definitions and Click on Create New Task Definition and select FARGATE
  • Add Task Definition Name, Task Execution Role, Task Memory and Task CPU.
  • Keep the rest of the values as it is.
  • Click on Add Container and add following details
  • Container Name
  • Image (URI that we copied in Step 10)
  • Memory Limits (e.g. Soft Limit: 128 MB)
  • Port Mappings (e.g. The port that we exposed while building Image. In our case it is 3000)
  • Save It and Finish Task Creation
  • Once the Task is created, select the Task Definition and click Run Task
  • Enter the following details before you run the task
  • Launch Type (FARGATE)
  • Platform Version (Latest)
  • Cluster VPC
  • Subnets (select at least two)
  • Security Group (make sure port 3000 is accessible in the security group)
  • Keep the rest of the settings as they are and Run Task


  • Once the Task is in the running state, check the task details; you will find the public IP auto-assigned by the ECS service
  • Access the running application at http://[Public IP Assigned by ECS]:3000

The application is now running in a serverless environment using ECS and ECR.

Hope this end-to-end tutorial helps you create, build and deploy Docker images on AWS ECS.

Happy Containerization!!

