Docker has become a ubiquitous way to ship code. Its power lies in bundling all dependencies together so that the code can run in any environment. The meaning of "Build once, run anywhere" truly comes to life once you start using it.

Usually an application depends on multiple services, each of which should run in its own container. This is where docker-compose comes in. You can define how the services interact with each other in a single, easy-to-understand file, and with that file your whole application can be brought to life with one command. Kubernetes is the obvious alternative, but for small applications Kubernetes is overkill, whereas anyone new to docker-compose can get up and running with it within an hour.
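To make that concrete, a minimal compose file for a setup like ours might look roughly like this (the service names and port mapping are illustrative, not the exact file shipped in the repo below):

# Illustrative compose file: a Flask web service plus its redis dependency
version: "3"
services:
  web:
    build: .             # build the Flask image from the local Dockerfile
    ports:
      - "80:5000"        # map host port 80 to Flask's default port 5000
    depends_on:
      - redis            # start redis before the web service
  redis:
    image: redis:alpine  # official redis image from Docker Hub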

When it comes to deploying docker-compose, AWS Elastic Container Service (ECS) is a great choice. It provides a compatibility layer with compose, and with an Elastic Load Balancer in front you can make sure your application scales to any number of users.

But it's hard to find a single guide that walks you through the whole process. This tutorial aims to fix that. We will deploy a sample Flask application and its redis dependency using AWS ECS.

1. Set up the sample project on your system

For this project, I created a simple Flask application (the repo is linked in the clone command below).

  • Download or clone the repo to your system.
    git clone https://github.com/pankajkgarg/ecs-tutorial.git
  • Install docker and docker-compose
  • Try running the application locally to make sure it works
cd ecs-tutorial
docker-compose -f docker-compose.local.yml up --build
  • Open http://localhost in your browser
  • You should see a welcome page
  • Press Ctrl+C to stop the containers, then run the following command to remove them
    docker-compose -f docker-compose.local.yml down

2. Set up an ECR repository and push your image to it

We will use AWS ECR to store our docker images in AWS. Moreover, it's the easiest registry to set up.

  • Go to ECR in AWS Console.
  • Create a repo. (You just need to provide a suitable name)
  • Copy the URI of the repo.
  • It looks like 7479080XXXXX.dkr.ecr.us-east-2.amazonaws.com/ecs-tutorial
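Alternatively, once the AWS CLI is configured (next step), the same repo can be created from the command line; the repository URI appears in the command's output:

# Create the ECR repository and print its details, including repositoryUri
aws ecr create-repository --repository-name ecs-tutorial --region us-east-2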

Now that the ECR repo is set up, you can push images to it. But first you need to set up the AWS CLI so that you can interact with AWS from your command line. You should have pip installed on your system.

  • Install the AWS CLI using pip3: pip3 install --upgrade --user awscli
  • Setup credentials in ~/.aws/credentials
[default]
aws_access_key_id = <your_aws_access_key>
aws_secret_access_key = <your_aws_secret_key>
  • If you are using some other AWS profile name instead of default, you need to select that profile in your shell environment. Run export AWS_DEFAULT_PROFILE=<your_aws_profile>
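To confirm the credentials work, a quick sanity check: this call prints the account ID and ARN of the identity the CLI is using.

# Verify that the AWS CLI can authenticate
aws sts get-caller-identity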

Finally, you can push your docker images to the ECR repo.

# Authenticate Docker client to your registry.
$(aws ecr get-login --no-include-email --region us-east-2)

# Build the docker image
docker build -t ecs-tutorial .

# Tag your docker image with ECR Repo URI (Replace the URI with your own repo)
docker tag ecs-tutorial:latest 7479080XXXXX.dkr.ecr.us-east-2.amazonaws.com/ecs-tutorial:latest

# Push the image
docker push 7479080XXXXX.dkr.ecr.us-east-2.amazonaws.com/ecs-tutorial:latest
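Note: aws ecr get-login exists only in AWS CLI v1. If you installed AWS CLI v2, it was removed in favor of get-login-password, which you pipe into docker login (replace the registry host with the host part of your own repo URI):

# AWS CLI v2 equivalent of the authentication step above
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 7479080XXXXX.dkr.ecr.us-east-2.amazonaws.com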

3. Set up the ECS cluster

  • Open AWS Console, Go to ECS -> Create Cluster. Choose "EC2 Linux + Networking"
  • We will be using a Spot t3a.nano instance with the Amazon Linux 2 AMI for this tutorial. The minimum EBS storage we can choose is 22 GB, which is more than enough for this project.
    [Screenshot: ECS cluster setup form]
  • Choose a key pair, so that you can log in to the instance if required
  • Networking
    • Select existing VPC and select all subnets
    • Use the security group of your choice; just make sure that port 80 is accessible from any IP.
  • Container instance IAM role
    • You can either create a new role with the "AmazonEC2ContainerServiceforEC2Role" policy attached
    • Or, you can let ECS automatically create one for you
  • Click "Create", this would complete the cluster setup.

4. Set up environment variables in AWS SSM

Keeping secrets out of code is an essential security practice. When running code locally, environment variables can be kept in a .env file, but this becomes tricky when deploying the same code to an ECS cluster.

Thankfully, the solution comes in the form of AWS Systems Manager -> Parameter Store. You can enter the variables in Parameter Store, and they will automatically be made available as environment variables to the deployed containers.

In our sample project, we use only one environment variable, FLASK_ENV. Let's configure it in SSM using the command line:
aws ssm put-parameter --region us-east-2 --type String --overwrite --name "/web/flask_env" --value "production"

Above, we used --type String, but if you have sensitive data such as API keys, it's better to store it using --type SecureString. Such secrets are encrypted using KMS-managed keys.
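For example, a hypothetical API key (the /web/api_key name is purely illustrative and not part of the sample project) could be stored like this:

# Store an encrypted secret; decryption uses the default KMS key for SSM
aws ssm put-parameter --region us-east-2 --type SecureString --overwrite --name "/web/api_key" --value "super-secret-value"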

You can check that the parameter was created successfully by running the following:
aws ssm get-parameters-by-path --region us-east-2 --recursive --path "/"

The mapping from /web/flask_env to the FLASK_ENV environment variable needs to be present in the file ecs-params.yml. Open the file; the mapping lives under task_definition / services / web / secrets:

        - value_from: /web/flask_env
          name: FLASK_ENV

You need to repeat this for every environment variable.
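For orientation, the relevant portion of ecs-params.yml has roughly this shape (the task_execution_role value refers to the role we create in the next step, and the second secret is a hypothetical illustration, not part of the sample project):

version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole  # role created in step 5
  services:
    web:
      secrets:
        - value_from: /web/flask_env         # SSM parameter path
          name: FLASK_ENV                    # env var name inside the container
        - value_from: /web/api_key           # hypothetical second secret
          name: API_KEY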

5. Set up an IAM role for ECS

The ECS container agent running on the EC2 instances needs to call AWS APIs to function properly: for example, to fetch ECR images, read SSM parameters, etc.

For this, we need to create a new role:

  • In AWS console, go to IAM -> Roles -> Create Role.
  • In "AWS Service", choose "Elastic container service" and in use case select "Elastic Container Service Task". Click "Next: Permissions"
  • Next, in "Attach permissions policies", attach "AmazonECSTaskExecutionRolePolicy" and "AmazonSSMFullAccess"
  • Set the role name to "ecsTaskExecutionRole" and click "Create Role"
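If you'd rather script this, the same role can be created with the AWS CLI (the inline trust policy is the standard one that lets ECS tasks assume the role):

# Create the role with a trust policy allowing ECS tasks to assume it
aws iam create-role --role-name ecsTaskExecutionRole \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ecs-tasks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# Attach the two managed policies
aws iam attach-role-policy --role-name ecsTaskExecutionRole \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
aws iam attach-role-policy --role-name ecsTaskExecutionRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMFullAccess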

6. Deploy your compose file on ECS

Check the status of your cluster:
aws ecs describe-clusters --region us-east-2 --cluster ecs-tutorial

If it responds with "status": "ACTIVE", you are good!
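The deploy itself uses ecs-cli, a separate binary from the aws CLI. If you don't have it yet, on Linux it can be installed roughly like this (see the AWS docs for macOS and Windows builds):

# Download the ecs-cli binary and make it executable
sudo curl -Lo /usr/local/bin/ecs-cli https://amazon-ecs-cli.s3.amazonaws.com/ecs-cli-linux-amd64-latest
sudo chmod +x /usr/local/bin/ecs-cli
ecs-cli --version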

Start the service on ECS Cluster

ecs-cli compose --project-name ecs-tutorial-taskdef --file docker-compose.ecs.yml --ecs-params ecs-params.yml --region us-east-2 --cluster ecs-tutorial service up

# To update the service with the latest image
ecs-cli compose --project-name ecs-tutorial-taskdef --file docker-compose.ecs.yml --ecs-params ecs-params.yml --region us-east-2 --cluster ecs-tutorial service up --deployment-min-healthy-percent 0 --create-log-groups --force-deployment

In the above command, we set --deployment-min-healthy-percent to 0 because there is only one server in the cluster, so the old task must stop before the new one can start. With multiple servers, the service can be updated without any downtime.
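Once the service is up, you can check which containers are running, and later tear everything down, with the matching ecs-cli subcommands:

# Show the running containers and their ports
ecs-cli compose --project-name ecs-tutorial-taskdef --file docker-compose.ecs.yml --region us-east-2 --cluster ecs-tutorial service ps

# Stop and delete the service when you are done
ecs-cli compose --project-name ecs-tutorial-taskdef --file docker-compose.ecs.yml --region us-east-2 --cluster ecs-tutorial service down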