In an ideal world, AWS deployments would seamlessly leverage cloud-native services, from AWS CodePipeline to AWS CodeDeploy. However, real-world scenarios often demand flexibility and creative solutions. This tutorial aims to guide you through automating AWS deployments when traditional pipeline services don’t perfectly align with your tech stack or use case. Note that we won’t delve into the complete CI/CD pipeline implementation here.


Before diving into the details, ensure you have the following prerequisites:

  • An ECR image containing your application code
  • A CodeBuild project
  • An ECS task definition
  • An ECS cluster
  • An ECS service

The Challenge

While setting up lower environments for a recent project, we faced a couple of challenges. The production environment required full automation with CodeDeploy’s blue/green deployments. However, this approach was not cost-effective for lower environments. We needed an alternative solution that would trigger a new deployment with each push to the main branch, allowing real-time testing of changes in the development environment.

Additionally, we were using Terraform for Infrastructure as Code (IaC) orchestration, specifically Terraform Cloud. Our pipelines were in a specific pipeline infrastructure “workspace”, and our application services (in this case, “ECS Fargate”) were in a separate workspace. This setup made it difficult to implement an automated pipeline due to dependencies between the larger workspaces.

Our solution? Using bash in conjunction with AWS CLI commands to create the buildspec file.

The Buildspec Breakdown

The buildspec can be broken down into five key stages:

1. Environment Variables Definition

The buildspec starts by declaring its version. The environment variables used throughout (REGION, ACCOUNT_ID, TASK_DEFINITION, CLUSTER, and SERVICE_NAME) are supplied to the build through the CodeBuild project rather than hard-coded here:

version: 0.2

2. Pre-Build Phase

Next, we authenticate to the ECR repository that contains the application image. The ACCOUNT_ID is unique to your AWS account; you can find the full login command in the ECR console under your repository's push commands.

      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region ${REGION} | docker login --username AWS --password-stdin ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com

3. Build Phase

In this step, we utilize one of the most useful features of the AWS CLI: the --query parameter, which filters the JSON output of an API call using a JMESPath expression. Essentially, we query the old task definition to extract its parameters and store them as local bash variables for use in the new deployment.

Use the --query parameter of the AWS CLI to extract information from the underlying JSON where the task definition is defined, and store the required parameters as bash variables. Then define a new variable, newTaskRev, by incrementing oldTaskRev with bash arithmetic expansion. For instance, if oldTaskRev = 1, then newTaskRev = 2.

      - echo Fetching the task definition...
      - taskDef=$(aws ecs describe-task-definition --task-definition $TASK_DEFINITION --region $REGION --query "taskDefinition.containerDefinitions")
      - roleTaskExec=$(aws ecs describe-task-definition --task-definition $TASK_DEFINITION --region $REGION --query "taskDefinition.executionRoleArn" --output text)
      - roleTaskArn=$(aws ecs describe-task-definition --task-definition $TASK_DEFINITION --region $REGION --query "taskDefinition.taskRoleArn" --output text)
      - cpuVal=$(aws ecs describe-task-definition --task-definition $TASK_DEFINITION --region $REGION --query "taskDefinition.cpu" --output text)
      - memoryVal=$(aws ecs describe-task-definition --task-definition $TASK_DEFINITION --region $REGION --query "taskDefinition.memory" --output text)
      - oldTaskRev=$(aws ecs describe-task-definition --task-definition $TASK_DEFINITION --region $REGION --query "taskDefinition.revision" --output text)
      - newTaskRev=$(($oldTaskRev + 1))
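The increment itself is plain bash arithmetic expansion, which you can sanity-check locally with a stand-in value for oldTaskRev (the value 1 below is illustrative, not fetched from AWS):

```shell
# Stand-in for the value returned by --query "taskDefinition.revision"
oldTaskRev=1
# Arithmetic expansion computes the next revision number
newTaskRev=$(($oldTaskRev + 1))
echo "$newTaskRev"   # prints 2
```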

4. Registering the New Task Definition

Register the new task definition using all the extracted variables from the previous step with the following command:

aws ecs register-task-definition --family $TASK_DEFINITION --container-definitions "$taskDef" --region $REGION --network-mode "awsvpc" --requires-compatibilities "FARGATE" --execution-role-arn "$roleTaskExec" --task-role-arn "$roleTaskArn" --cpu "$cpuVal" --memory "$memoryVal"
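A defensive variant (not part of the original pipeline, but using only documented CLI flags) is to have register-task-definition itself report the new revision via --query instead of assuming oldTaskRev + 1, which avoids a mismatch if another revision was registered in between:

```yaml
      - newTaskRev=$(aws ecs register-task-definition --family $TASK_DEFINITION --container-definitions "$taskDef" --region $REGION --network-mode "awsvpc" --requires-compatibilities "FARGATE" --execution-role-arn "$roleTaskExec" --task-role-arn "$roleTaskArn" --cpu "$cpuVal" --memory "$memoryVal" --query "taskDefinition.revision" --output text)
```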

5. Updating the Service (Deployment)

Finally, the service is updated using the ecs update-service API. This final step deploys the new task definition containing the updated application code:

      - echo Updating the service...
      - aws ecs update-service --cluster $CLUSTER --service $SERVICE_NAME --task-definition $TASK_DEFINITION:$newTaskRev
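Putting the five stages together, the full buildspec looks roughly like this. The phase layout (pre_build vs. build) is one reasonable arrangement, and the environment variables are expected to come from the CodeBuild project:

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region ${REGION} | docker login --username AWS --password-stdin ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com
  build:
    commands:
      - echo Fetching the task definition...
      - taskDef=$(aws ecs describe-task-definition --task-definition $TASK_DEFINITION --region $REGION --query "taskDefinition.containerDefinitions")
      - roleTaskExec=$(aws ecs describe-task-definition --task-definition $TASK_DEFINITION --region $REGION --query "taskDefinition.executionRoleArn" --output text)
      - roleTaskArn=$(aws ecs describe-task-definition --task-definition $TASK_DEFINITION --region $REGION --query "taskDefinition.taskRoleArn" --output text)
      - cpuVal=$(aws ecs describe-task-definition --task-definition $TASK_DEFINITION --region $REGION --query "taskDefinition.cpu" --output text)
      - memoryVal=$(aws ecs describe-task-definition --task-definition $TASK_DEFINITION --region $REGION --query "taskDefinition.memory" --output text)
      - oldTaskRev=$(aws ecs describe-task-definition --task-definition $TASK_DEFINITION --region $REGION --query "taskDefinition.revision" --output text)
      - newTaskRev=$(($oldTaskRev + 1))
      - aws ecs register-task-definition --family $TASK_DEFINITION --container-definitions "$taskDef" --region $REGION --network-mode "awsvpc" --requires-compatibilities "FARGATE" --execution-role-arn "$roleTaskExec" --task-role-arn "$roleTaskArn" --cpu "$cpuVal" --memory "$memoryVal"
  post_build:
    commands:
      - echo Updating the service...
      - aws ecs update-service --cluster $CLUSTER --service $SERVICE_NAME --task-definition $TASK_DEFINITION:$newTaskRev
```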

Terraform Integration

For Terraform users, a clean approach is to define the environment variables in Terraform and pass in the buildspec file using the file() function. This keeps application-specific code out of the root of your infrastructure repository (the variable references below are illustrative):

buildspec = file("buildspec/application/ecs_automated_deployment.yml")
environment_variables = {
  REGION          = var.region
  ACCOUNT_ID      = var.account_id
  TASK_DEFINITION = var.task_definition_name
  CLUSTER         = var.cluster_name
  SERVICE_NAME    = var.service_name
}

Triggering Deployment with EventBridge

Lastly, you can implement an EventBridge rule to trigger this CodeBuild project upon a successful image push to ECR:

resource "aws_cloudwatch_event_rule" "ecs_deploy_via_codebuild" {
  name        = "ecsdeployment"
  description = "Event rule for ecs deployment"
  event_pattern = jsonencode({
    source        = ["aws.ecr"]
    "detail-type" = ["ECR Image Action"]
    detail = {
      "action-type"     = ["PUSH"]
      "result"          = ["SUCCESS"]
      "repository-name" = ["${YOUR_REPOSITORY_NAME}"]
      "image-tag"       = ["latest"]
    }
  })
}

resource "aws_cloudwatch_event_target" "codebuild_ecs_deploy_target" {
  rule      = aws_cloudwatch_event_rule.ecs_deploy_via_codebuild.name
  arn       = {Codebuild_ARN}
  target_id = "codebuild-target"
  role_arn  = {CODEBUILD_ROLE_ARN}
}
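The role passed as role_arn must be assumable by EventBridge and allowed to start the build. A minimal sketch (resource and policy names here are illustrative, and you should scope the Resource to your CodeBuild project ARN):

```hcl
resource "aws_iam_role" "eventbridge_codebuild" {
  name = "eventbridge-codebuild-trigger"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "events.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "start_build" {
  name = "start-codebuild"
  role = aws_iam_role.eventbridge_codebuild.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "codebuild:StartBuild"
      Resource = "*" # scope this down to your CodeBuild project ARN
    }]
  })
}
```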

This setup triggers the CodeBuild project whenever you successfully push an image to ECR, so your deployments always run the latest application code. Hopefully this helps you automate your own AWS deployments. Happy deploying!
