Everything about AWS ECS (with hands-on)

anubhav jhalani
5 min read · Dec 1, 2022


This is the second article of the series Everything about AWS ECS, where I am going to explain the next component, the Cluster. For the other articles in this series, please click on the following links:

  1. ECS Overview and Task Definition
  2. Cluster
  3. Service
  4. Load Testing
  5. CI/CD Pipeline

2. Cluster

A cluster is basically a group of EC2 instances on which tasks run, either standalone or managed by a Service. Let's start with the first set of configurations in the cluster:

Cluster Name : Here you specify the name of the cluster, which can be anything.

VPC and Subnets : Here you specify the VPC and subnets in which you want to run your EC2 instances. I chose public subnets.
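If you prefer the CLI, the equivalent of this first step is roughly the sketch below; the console wizard additionally provisions the Auto Scaling and networking pieces described next. The cluster name my-demo-cluster is just a placeholder.

    # Create an empty ECS cluster (placeholder name)
    aws ecs create-cluster --cluster-name my-demo-cluster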

Further configurations:

Infrastructure : I am selecting EC2 instances because I want to configure everything manually. Here I specify that a new EC2 Auto Scaling group should be created for the cluster, which will register the instances in the cluster automatically and also scale the number of registered instances up and down based on a CloudWatch metric, which I will explain later.
Then I specify the operating system/architecture as Amazon Linux 2. Remember that we also specified an operating system/architecture in the task definition; we use the same one for the cluster so that our task can run on the cluster's EC2 instances.
For EC2 instance type I selected t2.micro, which has 1024 CPU units and 1 GB of memory. Remember, in the task definition we defined 0.25 vCPU (= 256 CPU units) and 512 MB of memory at the task level, which is less than the CPU and memory available on a t2.micro. This means our t2.micro instance can run at least one copy of that task definition.
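A quick back-of-the-envelope check: by CPU the instance could host 1024 / 256 = 4 copies of the task, but by memory only 1024 / 512 = 2, so memory is the binding constraint; in practice even fewer copies fit once the memory reserved by the operating system and the ECS agent is deducted (more on that below).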
Then I specify the minimum and maximum number of instances I want running after scale-in and scale-out, respectively. These instances will run inside the VPC and subnets defined above.
For troubleshooting purposes, I am using an SSH key.

Monitoring : I enable Container Insights, which automatically collects, aggregates, and summarizes Amazon ECS metrics and logs. It provides CPU and memory utilization, read and write storage, network transmit and receive rates, etc. for clusters, services, and tasks that are in the RUNNING state. It should be enabled for good observability of ECS, and we will see it in action later.
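For reference, Container Insights can also be toggled on an existing cluster from the CLI (the cluster name is again a placeholder):

    # Enable Container Insights on an existing cluster
    aws ecs update-cluster-settings \
      --cluster my-demo-cluster \
      --settings name=containerInsights,value=enabled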

Now click on Create and it will create the following cluster with one registered instance up and running, because we defined a minimum of 1 instance in the cluster's Auto Scaling group above.
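A quick way to verify this from the CLI is a sketch like the following (placeholder cluster name):

    # Check cluster status and counts of registered instances and tasks
    aws ecs describe-clusters --clusters my-demo-cluster
    # List the container instances registered in the cluster
    aws ecs list-container-instances --cluster my-demo-cluster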

We can also see that there is no service or task running inside the cluster; the same can be seen by clicking on the Services and Tasks tabs.
Now comes the interesting part. Click on the Infrastructure tab and you will see the Auto Scaling group and the only instance launched by that Auto Scaling group.

Now let's focus on the encircled parameters one by one:

Desired Size : This is the parameter set by the Auto Scaling group during scale-out or scale-in. For example, during scale-out, if the Auto Scaling group decides to increase the number of instances from 1 to 2, then this desired size will be set to 2 automatically.
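If you want to inspect this value outside the console, something like the following works; note that with managed scaling the capacity provider keeps adjusting it, so manual changes are usually overridden. The ASG name below is a placeholder for whatever ECS generated for your cluster.

    # Show the ASG's min/max/desired capacity (ASG name is a placeholder)
    aws autoscaling describe-auto-scaling-groups \
      --auto-scaling-group-names my-demo-cluster-asg \
      --query 'AutoScalingGroups[0].{Min:MinSize,Max:MaxSize,Desired:DesiredCapacity}'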

Memory available : A t2.micro has 1 GB of memory, but here it shows only 982 MB available. This is because some memory is consumed by the instance's core processes and by the Amazon ECS container agent running inside it. You can connect to the instance via Session Manager, run the docker ps command, and you will see the ECS container agent running there.
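For example, once connected through Session Manager you can confirm the agent is there, and the same registered-versus-remaining resource split is visible through the ECS API (the cluster name and ARN below are placeholders):

    # On the instance (via Session Manager): the ECS agent runs as a container
    docker ps        # look for the amazon/amazon-ecs-agent image

    # From your workstation: compare registered and remaining resources
    aws ecs describe-container-instances \
      --cluster my-demo-cluster \
      --container-instances <container-instance-arn> \
      --query 'containerInstances[0].[registeredResources,remainingResources]'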

ASG : This is the Auto Scaling group attached to the cluster; it is responsible for scaling in and out and for registering the instances in the cluster. The Auto Scaling group backs a capacity provider, which supplies capacity (EC2 instances) to the cluster to run the tasks. Right now it is running one EC2 instance, shown under Container Instances. Clicking on the ASG will open a new window with more details about it. I am assuming you are familiar with the configuration of an Auto Scaling group; what I would like to point out here is the Automatic scaling tab.
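You can also see how the capacity provider ties the ASG to the cluster from the CLI (cluster name is a placeholder):

    # Show capacity providers and their managed scaling settings
    aws ecs describe-capacity-providers
    # Show which capacity providers are attached to the cluster
    aws ecs describe-clusters --clusters my-demo-cluster \
      --include ATTACHMENTS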

You can see that there is a target tracking scaling policy on the metric CapacityProviderReservation with a target value of 100. This means the Auto Scaling group scales the number of instances to keep this metric at or near 100. Now the question is: what does CapacityProviderReservation mean, and how is its value calculated? There is a great article explaining this, but to summarize in simple words (a rough worked example follows the list):

— If CapacityProviderReservation = 100, exactly as many EC2 instances are running as are needed to run the tasks required by the Service, so no scaling is needed.

— If CapacityProviderReservation > 100, more EC2 instances are needed to run the tasks required by the Service, so the Auto Scaling group scales out.

— If CapacityProviderReservation < 100, more EC2 instances are running than are needed for the tasks required by the Service, so the Auto Scaling group scales in.
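To make this concrete (as described in that article), the metric is computed roughly as CapacityProviderReservation = M / N × 100, where N is the number of instances the ASG currently owns and M is the number of instances needed to host all the tasks the Service wants to run. For example, with N = 1 instance running and tasks that need M = 2 instances, the metric is 2 / 1 × 100 = 200; target tracking then raises the desired capacity until the metric returns to the target of 100.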

Now you might be wondering why we didn't define such a scaling policy while creating the cluster. The answer is that this policy is created automatically by ECS when the cluster is created.

Amazon ECS also automatically creates and manages the CloudWatch alarms that trigger the above target tracking scaling and calculate the scaling adjustment based on the target value. If you go to CloudWatch alarms, you will see two alarms:
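These are ordinary target tracking alarms (a high alarm that drives scale-out and a low alarm that drives scale-in); you can list them with something like the following, assuming the generated alarm names start with the usual TargetTracking prefix:

    # List the target tracking alarms created for the capacity provider's ASG
    aws cloudwatch describe-alarms --alarm-name-prefix TargetTracking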

One more important thing to notice is that if you click on the instance ID under Container Instances and then click on Edit User data (in the EC2 console),

you will see a script:

This is the script that registers the EC2 instance with the cluster. It is added automatically for you as part of the launch configuration used by the Auto Scaling group.
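For reference, on the ECS-optimized Amazon Linux 2 AMI this registration script is essentially the standard ECS user data; a minimal sketch, using the placeholder cluster name from above, looks like this:

    #!/bin/bash
    # Tell the ECS agent which cluster to register this instance into
    echo "ECS_CLUSTER=my-demo-cluster" >> /etc/ecs/ecs.config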

Now we have a task definition ready and a running cluster to run tasks on, so next we will schedule a Service to run and maintain a specified number of tasks simultaneously on the cluster's EC2 instances.

My next article will explain the component Service.
