10 smart ways to use AWS CodeBuild

Moha Alsouli
14 min read · Jul 12, 2020

I cannot emphasise enough how useful AWS CodeBuild is, so I thought I would list “some” of the powerful and smart uses we have adopted in our pipelines; because at Tigerspike, our inquisitive nature brings insight and innovation.

Back in the day, and I’m talking a few years ago, like many companies, our CI was limited to Jenkins and a few build agents on EC2 instances. Maintaining the build agents with updates and different tools quickly became a time-consuming task, so our development teams started creating their own AMIs and build agents. Of course, maintaining Jenkins and the shared executors across dozens of projects was getting out of hand too. It was troublesome and often a blocker, and we had to make sure there were always enough people around with the knowledge to maintain it.

But that’s not the case anymore. Hold my coffee, let me count… I, myself, have built over 40 custom pipelines for different projects and workstreams for different teams in just the last 2 years! Too many? No, not at all. Every workstream in every project has its own requirements.
Thanks to AWS Developer Tools, my team and I can swiftly build CI/CD pipelines exactly as required for each project, then hand them over to the delivery teams to update and maintain themselves through IaC templates and scripts. Only complex updates get escalated to us, occasionally. And soon enough, the development teams will be building and maintaining their own pipelines, thanks to the professional development incentives at Tigerspike and the DevOps mentality program our senior engineers are pushing through.

Anyway, enough with the backstory, let me jump straight into CodeBuild’s awesomeness.

As AWS puts it, CodeBuild is a fully managed CI tool that lets you compile source code, run tests, and produce packages that are ready to deploy.

So, how do we use CodeBuild at Tigerspike? Here are my top 10 uses:
1. Builds, ad-hoc.
2. Builds, part of a CodePipeline.
3. Builds, docker.
4. Unit testing.
5. Automation testing.
6. Database migrations.
7. Database operations.
8. CloudFormation Packaging (e.g. Nested Stacks)
9. Source Code repository tagging
10. Deployment marking (e.g. NewRelic)

I won’t get into how to set up CodeBuild projects here; check the Getting Started or CloudFormation documentation for that. It’s worth mentioning, though, that CodeBuild can use CodePipeline, CodeCommit, S3, GitHub, and/or Bitbucket as source providers, or can be set up without a source at all.
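For example, a bare-bones CodeBuild project defined in CloudFormation might look something like the sketch below; the project name, role ARN, repository URL, and image are placeholders to swap for your own:

Resources:
  AdhocBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: my-adhoc-build                      # placeholder name
      ServiceRole: arn:aws:iam::123456789012:role/my-codebuild-role
      Source:
        Type: CODECOMMIT
        Location: https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo
      Artifacts:
        Type: NO_ARTIFACTS
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:4.0       # pick an image that has the runtimes you need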

Now let’s dive a little into each of these uses.

1. Builds, ad-hoc.

It’s called CodeBuild for a reason. Its main purpose is to build.
So, let’s assume we want to build a .Net Core package and simply put it in S3. First, to set up our CodeBuild Project, we have to choose one of the CodeBuild images that support our .Net Core version. See the list here. Then, we have to set up the source provider, CodeCommit for example.
Finally, add a buildspec.yml (the build script file) to your source code. The buildspec.yml for this job should simply look something like this:

version: 0.2
env:
  variables:
    S3_BUCKET: my-packages-bucket
phases:
  install:
    runtime-versions:
      dotnet: 3.1
  build:
    commands:
      - echo "CD into source folder, because my source code is tidy"
      - cd src
      - echo "Restore, Build, then Publish.."
      - dotnet restore
      - dotnet build --no-restore
      - dotnet publish --no-restore --output outputDirectory
  post_build:
    commands:
      - echo "Compressing the package.."
      - cd ../outputDirectory
      - zip -r myBuild.zip .
      - echo "Uploading to S3.."
      - aws s3 cp myBuild.zip s3://${S3_BUCKET}/myBuild.zip
      - echo "Done."

Pretty straightforward, isn’t it?!
First, we defined the buildspec version (0.2 is recommended), then we defined the environment variable S3_BUCKET. Then, in the install phase, we specify the runtime we want CodeBuild to initialise. Luckily, our image and runtime come with zip and the AWS CLI installed, so we don’t need to run any other install commands.
In the build phase, we simply go into the source code folder, restore .Net, build, then publish. I like running these commands separately, in that order, to give our teams room to explicitly pass different options and configurations at each step; if they want, they can drop the explicit restore or do other things. It’s up to them, that’s the point ;)
Post build, we go into the output folder, zip the files, then upload the archive to S3.

As you might have imagined, this CodeBuild project will need an IAM Role with the right permissions to pull code from CodeCommit and put objects in S3.
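As a rough sketch in CloudFormation YAML (the ARNs are placeholders, and the CloudWatch Logs permissions every CodeBuild project also needs are omitted), the role’s policy statements could look something like:

- Effect: Allow
  Action:
    - codecommit:GitPull            # pull the source from CodeCommit
  Resource: arn:aws:codecommit:us-east-1:123456789012:my-repo
- Effect: Allow
  Action:
    - s3:PutObject                  # upload the zipped package
  Resource: arn:aws:s3:::my-packages-bucket/*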

2. Builds, part of a CodePipeline.

Let’s assume the same scenario as before. We want to build a .Net Core package but instead of uploading it to S3, we want to pass the artifacts to the next Action, or Stage, in CodePipeline. Our buildspec.yml will look like this:

version: 0.2
phases:
  install:
    runtime-versions:
      dotnet: 3.1
  build:
    commands:
      - echo "CD into source folder, because my source code is tidy"
      - cd src
      - echo "Restore, Build, then Publish.."
      - dotnet restore
      - dotnet build --no-restore
      - dotnet publish --no-restore --output outputDirectory
artifacts:
  base-directory: outputDirectory
  files:
    - '**/*'

You might have noticed the buildspec above looks pretty similar to the previous one, except that there is no post_build phase and no environment variable declaring the S3 bucket. Instead, CodePipeline passes the code to CodeBuild, and we declare what artifacts we want to hand back to CodePipeline when the job is done.
So, in the artifacts section in this example, we specified the base directory of the artifacts, outputDirectory, and selected all files in that directory using '**/*', which CodeBuild interprets as all files, recursively.
If you want to select specific files, or files in multiple locations, you can skip the base-directory declaration and list your files instead with their paths relative to the working directory.
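For instance, an artifacts section that picks out individual files (the paths below are purely illustrative) could look like this:

artifacts:
  files:
    - src/outputDirectory/myApp.dll
    - appspec.yml
    - scripts/*.sh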

This CodeBuild project only needs an IAM Role with permissions to interact with CodePipeline.

3. Builds, docker

For this example, we will assume a different scenario: we want to build a Docker image and register it in Amazon Elastic Container Registry (ECR). You can then use these images to deploy containers to Amazon Elastic Container Service, manually or through CodePipeline. Assuming we already have a Dockerfile in our repository, our buildspec.yml should look like this:

Note: you must enable Privileged Mode in your CodeBuild project’s configuration to build Docker images with it (see the CloudFormation excerpt at the end of this section).

version: 0.2
env:
  variables:
    ECR_REPO: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapi
phases:
  install:
    runtime-versions:
      docker: 18
  build:
    commands:
      - echo "Building a Docker image.."
      - docker build -t myimage . --file Dockerfile
      - echo "Tagging Docker image for ECR.."
      - docker tag myimage ${ECR_REPO}:${CODEBUILD_SOURCE_VERSION}
      - echo "Logging into ECR.."
      - $(aws ecr get-login --no-include-email)
      - echo "Pushing Docker image to ECR.."
      - docker push ${ECR_REPO}:${CODEBUILD_SOURCE_VERSION}
      - echo "Done."

In this scenario, we defined our environment variable ECR_REPO, which is the ECR repository we want to push our images to. After that, we specify the docker 18 runtime in the install phase, then move on to the build steps.
The build steps are standard docker build, docker tag and docker push commands, however, we need to log in to ECR before being able to push. The CLI command $(aws ecr get-login --no-include-email) does the trick.
Also, did you notice that we tagged our image with CODEBUILD_SOURCE_VERSION variable? This is one of the many variables available by default to your build environment. I like using the commit ID as a tag rather than latest for better management of deployments and rollbacks.

For this CodeBuild project, make sure it has an IAM Role with permissions to authenticate and upload images to ECR; and interact with your source provider.
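As a side note on the Privileged Mode requirement mentioned earlier: if the project is defined in CloudFormation, it is a single property on the environment block of the AWS::CodeBuild::Project resource. A minimal excerpt, with the other properties trimmed and an illustrative image name:

Environment:
  Type: LINUX_CONTAINER
  ComputeType: BUILD_GENERAL1_SMALL
  Image: aws/codebuild/standard:4.0
  PrivilegedMode: true              # required for Docker builds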

4. Unit testing.

This one is simple.
Let’s assume we are building a Node.js package using npm, but before building and publishing the artifacts, we want to run unit tests.
Here’s what our buildspec.yml should look like:

version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 12
  build:
    commands:
      - echo "Installing dependencies.."
      - npm install
      - echo "Unit testing.."
      - npm test
      - echo "Building.."
      - npm run build
artifacts:
  base-directory: build
  files:
    - '**/*'

Did you notice the npm test command? CodeBuild runs the tests in your code, and if they fail, the build stops at this step; it won’t continue. If they pass, CodeBuild carries on and passes the build artifacts to the next action or stage in CodePipeline, just like in example 2 above.

This CodeBuild project only needs permissions to interact with CodePipeline.
Tip: consider setting up build notifications using CloudWatch and SNS with this job if you want to be notified on success or failure.
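One way to wire that up is a CloudWatch Events rule that watches CodeBuild state changes and publishes to an SNS topic. A rough CloudFormation sketch, assuming the topic ARN is a placeholder and its policy already allows CloudWatch Events to publish to it:

BuildNotificationRule:
  Type: AWS::Events::Rule
  Properties:
    Description: Notify on CodeBuild success or failure
    EventPattern:
      source:
        - aws.codebuild
      detail-type:
        - CodeBuild Build State Change
      detail:                       # a project-name list can also be added here to scope specific projects
        build-status:
          - SUCCEEDED
          - FAILED
    Targets:
      - Arn: arn:aws:sns:us-east-1:123456789012:mySNSTopic
        Id: BuildNotificationTopic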

5. Automation testing.

For automation testing, our teams prefer to maintain the automation test scripts in separate repositories from their application repositories. So, just like the ad-hoc builds, we trigger our automation test jobs on demand after successfully deploying to QA and/or UAT, and the generated reports are then sent to the team.
For simplicity, let’s say our test scripts are written in Node.js, run the automation tests, and write reports to a local reports folder. After that, we want to upload these reports to S3 and notify the team through SNS. Of course, we do not want to hard-code any secrets or endpoints in our scripts; we store these in JSON format in AWS Secrets Manager and retrieve them into environment variables that our scripts can read before the tests start.
This automated job saved one of our engineering teams 15 minutes per run compared to when they were running the tests locally on their machines. Imagine: if we need to run the tests 4 times a week, we’re saving an hour a week by running them on CodeBuild!

Here’s a simplified sample of a buildspec.yml for this job:

version: 0.2
env:
  variables:
    S3_BUCKET: my-reports-bucket
    TOPIC_ARN: arn:aws:sns:us-east-1:123456789012:mySNSTopic
  secrets-manager:
    SECRET: automation-testing-secret
phases:
  install:
    runtime-versions:
      nodejs: 12
  pre_build:
    commands:
      - echo "Get current timestamp for reports naming.."
      - TIME=$(date +"%Y%m%d_%H%M%S")
  build:
    commands:
      - echo "Installing dependencies.."
      - npm install
      - echo "Testing.."
      - npm test
  post_build:
    commands:
      - echo "Zipping reports.."
      - cd reports
      - zip -r ${TIME}.zip .
      - echo "Uploading zipped reports to S3.."
      - aws s3 cp ${TIME}.zip s3://${S3_BUCKET}/${TIME}.zip
      - echo "Notifying of the new reports.."
      - aws sns publish --topic-arn ${TOPIC_ARN} --message "You can Download the new Automation Test reports from here: https://${S3_BUCKET}.s3-us-east-1.amazonaws.com/${TIME}.zip"
      - echo "Done."

In this buildspec, the env section has both plaintext variables and a reference to a secret stored in AWS Secrets Manager. For more info on how to reference variables in a buildspec file, see here.
The buildspec phases are then simple. In the install phase, we specify the Node.js version we want CodeBuild to run. In the pre_build phase, we grab a timestamp so we can use it to name the compressed reports archive. In the build phase, we install the dependencies, then run the tests. Post build, we zip the reports, push them to S3, then notify the team with an SNS message. A very straightforward process, really.
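Side note: if you keep multiple values in one secret as JSON, the buildspec reference can point at an individual key using the secret-id:json-key syntax, which saves a parsing step. The secret and key names below are illustrative:

env:
  secrets-manager:
    TEST_USERNAME: automation-testing-secret:username
    TEST_PASSWORD: automation-testing-secret:password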

For this CodeBuild project, you will need an IAM Role that has permissions to retrieve secrets from Secrets Manager, put objects in S3 and publish to SNS.

6. Database migrations.

During development of new applications, our software engineers often need to migrate or update the database schemas and tables. This job is often forgotten in CI/CD pipelines, but DevOps principles and best practices should be applied to all pieces of the puzzle, and this is one of the critical ones!
So, let’s assume we have a .Net Core build that runs as part of a CodePipeline, as in example 2 above. However, after the build, we also want CodeBuild to generate the SQL migrations script, connect to the database then run the script. And to do this, we’ll need 2 useful tools:
- jq: a JSON parser (to parse the database connection details from JSON)
- MySQL client (to connect to our MySQL database, e.g. Aurora).

Here’s what the buildspec.yml should look like:

version: 0.2
env:
  secrets-manager:
    SECRET: db-connection-secret
phases:
  install:
    runtime-versions:
      dotnet: 3.1
    commands:
      - apt-get update
      - apt-get install -y jq mysql-client
  pre_build:
    commands:
      - echo "Getting Database details.."
      - DBSERVER=$(echo $SECRET | jq -r '.DBendpoint')
      - DBUser=$(echo $SECRET | jq -r '.DBuser')
      - DBPassword=$(echo $SECRET | jq -r '.DBpassword')
  build:
    commands:
      - echo "CD into source folder.."
      - cd src
      - echo "Restore, Build, then Publish.."
      - dotnet restore
      - dotnet build --no-restore
      - dotnet publish --no-restore --output outputDirectory
      - echo "Generate migrations script.."
      - dotnet ef migrations script -o migration.sql --idempotent
  post_build:
    commands:
      - echo "Connecting to ${DBSERVER} & running MySQL query.."
      - mysql -h ${DBSERVER} -P 3306 -u ${DBUser} -p${DBPassword} mydatabase1 < migration.sql
      - echo "Done."
artifacts:
  base-directory: outputDirectory
  files:
    - '**/*'

In the buildspec above, we retrieve our database connection secret from AWS Secrets Manager into our environment variables; this secret should hold the connection details in JSON format. We then tell CodeBuild to run .Net Core 3.1 and install jq and the MySQL client. Once installed, we parse the connection details out of the JSON secret in pre_build. In the build phase, we restore, build, and publish .Net, then generate the SQL migrations script. Post build, we connect to the database using the parsed details and feed it the migration script to run. When finished, like before, we pass the build artifacts to CodePipeline.

This CodeBuild project will need an IAM Role that has permissions to interact with Secrets Manager and CodePipeline. It also needs network access to the database instance, e.g. by running inside the VPC.
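If the project is defined in CloudFormation, that VPC access is a VpcConfig block on the AWS::CodeBuild::Project resource; the VPC, subnet, and security group IDs below are placeholders:

VpcConfig:
  VpcId: vpc-0123456789abcdef0
  Subnets:
    - subnet-0123456789abcdef0
    - subnet-0fedcba9876543210
  SecurityGroupIds:
    - sg-0123456789abcdef0          # a group the database's security group accepts connections from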

7. Database operations.

Same as in the previous example, we want to perform database operations somewhere in our CI/CD pipeline. Let’s say, for example, we have one development database instance that our big engineering team shares. We don’t want everyone logging in to the database to create new schemas (databases) and forgetting to delete them when their work is done. We want to automate this process: a new schema created for each feature branch and dropped when the branch is merged.

This is not complicated, it saves time, and it ensures proper clean-up. We’ve done this for one of our big projects, which had a team distributed across multiple offices, and it was amazing. All we had to do was set up our version control to trigger a job when a new branch is created or merged, passing the branch name and either “create” or “drop” as environment variables. Any modern version control system with pipeline capabilities can trigger CodeBuild through the AWS CLI.
For reference, here is what the trigger CLI command looks like, including the branch name and the action required, if we trigger from Bitbucket:

aws codebuild start-build --project-name myProject --environment-variables-override "[{\"name\":\"ACTION\",\"value\":\"create\"},{\"name\":\"BRANCH\",\"value\":\"${BITBUCKET_BRANCH}\"}]"

And, here’s what the buildspec.yml should look like:

version: 0.2
env:
  secrets-manager:
    SECRET: db-connection-secret
phases:
  install:
    runtime-versions:
      dotnet: 3.1
    commands:
      - apt-get update
      - apt-get install -y jq mysql-client
  pre_build:
    commands:
      - echo "Getting Database details.."
      - DBSERVER=$(echo $SECRET | jq -r '.DBendpoint')
      - DBUser=$(echo $SECRET | jq -r '.DBuser')
      - DBPassword=$(echo $SECRET | jq -r '.DBpassword')
      - echo "Define the action required.."
      - |
        if [ ${ACTION} = "create" ]; then
          SQL_QUERY="CREATE DATABASE IF NOT EXISTS \`${BRANCH}\`;";
        elif [ ${ACTION} = "drop" ]; then
          SQL_QUERY="DROP DATABASE \`${BRANCH}\`;";
        else
          echo "Action ${ACTION} is not allowed!" && exit 1;
        fi
  build:
    commands:
      - echo "Connecting to ${DBSERVER} & running MySQL Action.."
      - mysql -h ${DBSERVER} -P 3306 -u ${DBUser} -p${DBPassword} -e "${SQL_QUERY}"
      - echo "Done."

Same as in the previous example, we parse the database connection details from the Secrets Manager secret in our environment variables. After parsing the secret, we define the SQL command we want to run based on the incoming $ACTION environment variable, whether create or drop, and we include the $BRANCH name in the command so the schema name matches the branch. Once the SQL command is defined, we connect to the database and run it.
As mentioned, this job can both create and drop schemas. The schema name will match your feature branch name, so make sure branch names are limited to characters supported by your database engine. Otherwise, you can add logic to your script to check the branch name and convert any unsupported characters.
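As a rough sketch of that conversion, assuming MySQL (where identifiers are capped at 64 characters), you could derive a safe schema name before building the query:

- SCHEMA_NAME=$(echo "${BRANCH}" | sed 's/[^A-Za-z0-9_]/_/g' | cut -c1-64)   # replace unsupported characters and trim
- SQL_QUERY="CREATE DATABASE IF NOT EXISTS \`${SCHEMA_NAME}\`;"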

This CodeBuild project only needs permissions to retrieve secrets from Secrets Manager, and network access to the database e.g. through the VPC.

8. CloudFormation Packaging (e.g. Nested Stacks)

We often, if not always, use CodePipeline’s integration with CloudFormation to deploy or update our stacks. This can be tricky if your CloudFormation templates are organised as nested stacks. To deploy manually, you would usually use the AWS CLI to package the templates into a single deployable template. Fear not, this is easy to replicate in a CI/CD pipeline too.
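For context, the manual version is just two CLI calls; the bucket and stack names here are placeholders:

aws cloudformation package --template-file mainTemplate.yaml --s3-bucket my-cloudformation-bucket --output-template-file packaged.yml
aws cloudformation deploy --template-file packaged.yml --stack-name my-stack --capabilities CAPABILITY_IAM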

Here’s a buildspec.yml example:

version: 0.2
env:
  variables:
    S3_BUCKET: my-cloudformation-bucket
    REGION: us-east-1
phases:
  build:
    commands:
      - echo "Packaging CloudFormation nested stacks.."
      - aws cloudformation package --region $REGION --template-file mainTemplate.yaml --s3-bucket $S3_BUCKET --output-template-file packaged.yml
      - echo "Done."
artifacts:
  files:
    - packaged.yml
    - parameters/*.json

Very straightforward: in the buildspec above, we defined the bucket and region we want the nested templates uploaded to so CloudFormation can package them, then we ran the package command. As you can tell, this CodeBuild job passes the packaged.yml template, along with JSON parameter files, as artifacts to the next deployment action in CodePipeline. You don’t have to use parameters or parameter files with CloudFormation, but it’s good practice, so do it when it makes sense.
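If you’re wondering what those parameter files hold: with CodePipeline’s CloudFormation deploy action they follow the template configuration format, roughly like the illustrative example below (keys and values are made up):

{
  "Parameters": {
    "Environment": "production",
    "InstanceType": "t3.small"
  }
}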

This CodeBuild project will need permissions to write to the specified S3 bucket (for CloudFormation) as well as permissions to interact with CodePipeline.

9. Source Code repository tagging

In some workflows, we’re required to tag the commit, in source control, which we have just deployed successfully to an environment. For example, after successfully deploying to Production, we want to tag the deployed commit back in source control with a prod tag.
To do this, CodeBuild will need a Git SSH key, which we will store in AWS Secrets Manager, and pass in through the environment variables.

Here’s what our buildspec.yml should look like:

version: 0.2
env:
  variables:
    REPO: myReactProject
  secrets-manager:
    GITSSH: git-ssh-key
phases:
  pre_build:
    commands:
      - echo "Parsing the SSH key.."
      - mkdir -p ~/.ssh
      - echo "$GITSSH" > ~/.ssh/id_rsa          # quoted to preserve the key's line breaks
      - chmod 400 ~/.ssh/id_rsa
      - ssh-keyscan github.com >> ~/.ssh/known_hosts
      - echo "Connecting to GitHub and cloning.."
      - ssh -T git@github.com || true            # GitHub returns a non-zero exit code for -T even on success
      - git clone git@github.com:myWorkspace/${REPO}.git
      - cd ${REPO}
  build:
    commands:
      - echo "Deleting old tag if exists.."
      - git tag --delete prod || true
      - git push origin --delete prod || true
      - echo "Tagging and pushing the tag.."
      - git tag -a -f prod $CODEBUILD_SOURCE_VERSION -m "Production deployment"
      - git push origin prod
      - echo "Done."

Above, we defined our environment variables, i.e. the GitHub repository name and a reference to the Git SSH key stored in AWS Secrets Manager. CodeBuild then writes the SSH key to the SSH folder, connects to GitHub, and clones the repository. Once cloned, we remove the tag if it already exists, tag the commit we deployed, then push to GitHub. In this scenario, we used the available environment variable CODEBUILD_SOURCE_VERSION, which is the commit ID passed by CodePipeline, as explained here. If you don’t have the commit ID in the available variables, you can pass it along with your build in a file.
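A minimal sketch of that hand-off (the file name commit.txt is just an example, and the earlier build job would need to include it in its artifacts): write the commit ID out in the build job, then read it back here before tagging:

# In the build job's post_build commands, write out the resolved commit ID:
- echo "${CODEBUILD_RESOLVED_SOURCE_VERSION}" > commit.txt
# In this tagging job, read it back and tag with it:
- COMMIT_ID=$(cat commit.txt)
- git tag -a -f prod "${COMMIT_ID}" -m "Production deployment"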

This CodeBuild project needs permissions to retrieve secrets from Secrets Manager and to interact with CodePipeline.

10. Deployment marking (e.g. NewRelic)

One good post-deployment practice is to mark the timeline of your monitoring system with the deployment time and version. This record helps you see changes in your application’s performance and usage and correlate them with deployments. NewRelic, for example, allows you to record deployments through its REST API.

Below is an example buildspec.yml for this job:

version: 0.2
env:
  secrets-manager:
    APP_ID: newrelic-app-id
    API_KEY: newrelic-api-key
phases:
  build:
    commands:
      - echo "Sending deployment mark to NewRelic.."
      - curl -X POST "https://api.newrelic.com/v2/applications/${APP_ID}/deployments.json" -H "X-Api-Key:${API_KEY}" -i -H "Content-Type:application/json" -d "{\"deployment\":{\"revision\":\"${CODEBUILD_SOURCE_VERSION}\",\"description\":\"New deployment.\"}}"
      - echo "Done."

This job simply sends a POST request to NewRelic using the App ID and API Key retrieved from Secrets Manager. The POST request includes the required parameters, namely a revision and a description. We used CODEBUILD_SOURCE_VERSION, i.e. the commit ID, as the revision in this example.

Again, this CodeBuild project needs permissions to retrieve secrets from Secrets Manager and to interact with CodePipeline.

Let’s recap.

In this post, we have listed the top 10 ways we utilise AWS CodeBuild at Tigerspike:
1. Builds, ad-hoc.
2. Builds, part of a CodePipeline.
3. Builds, docker.
4. Unit testing.
5. Automation testing.
6. Database migrations.
7. Database operations.
8. CloudFormation Packaging (e.g. Nested Stacks)
9. Source Code repository tagging
10. Deployment marking (e.g. NewRelic)

We all agree that every project has its own requirements and every CI/CD pipeline is unique. But when you have such a great, easy-to-use tool available to you, you might as well make smarter use of it. Protect your secrets, automate jobs, reduce the chances of error, and save time and effort.

Have you used CodeBuild in other smart ways?
If you have, share your use in the responses below.
