10 smart ways to use AWS CodeBuild

Anyway, enough with the backstory; let’s jump straight into CodeBuild’s awesomeness.

As AWS puts it, CodeBuild is a fully managed continuous integration service that compiles your source code, runs tests, and produces packages that are ready to deploy.

Now let’s dive a little into each of these uses.

1. Builds, ad-hoc.

It’s called CodeBuild for a reason. Its main purpose is to build.
So, let’s assume we want to build a .NET Core package and simply put it in S3. First, to set up our CodeBuild project, we have to choose one of the CodeBuild images that supports our .NET Core version (see the list of available images in the AWS documentation). Then, we have to set up the source provider, CodeCommit for example.
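Setting up the project can be done in the console or scripted. Here is a minimal CLI sketch; the project name, repository, image, and role ARN are all placeholders:

aws codebuild create-project \
    --name my-dotnet-build \
    --source type=CODECOMMIT,location=https://git-codecommit.us-east-1.amazonaws.com/v1/repos/myRepo \
    --artifacts type=NO_ARTIFACTS \
    --environment type=LINUX_CONTAINER,image=aws/codebuild/standard:4.0,computeType=BUILD_GENERAL1_SMALL \
    --service-role arn:aws:iam::123456789012:role/my-codebuild-role

Artifacts are set to NO_ARTIFACTS here because the buildspec below uploads the package to S3 itself.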
Finally, add a buildspec.yml (the build script file) to your source code. The buildspec.yml for this job should simply look something like this:

version: 0.2

env:
  variables:
    S3_BUCKET: my-packages-bucket

phases:
  install:
    runtime-versions:
      dotnet: 3.1
  build:
    commands:
      - echo "CD into source folder, because my source code is tidy"
      - cd src
      - echo "Restore, Build, then Publish.."
      - dotnet restore
      - dotnet build --no-restore
      - dotnet publish --no-restore --output ../outputDirectory   # publish to the repo root so post_build can find the folder
  post_build:
    commands:
      - echo "Compressing the package.."
      - cd ../outputDirectory
      - zip -r myBuild.zip .
      - echo "Uploading to S3.."
      - aws s3 cp myBuild.zip s3://${S3_BUCKET}/myBuild.zip
      - echo "Done."

2. Builds, part of a CodePipeline.

Let’s assume the same scenario as before: we want to build a .NET Core package, but instead of uploading it to S3, we want to pass the artifacts to the next Action, or Stage, in CodePipeline. Our buildspec.yml will look like this:

version: 0.2

phases:
  install:
    runtime-versions:
      dotnet: 3.1
  build:
    commands:
      - echo "CD into source folder, because my source code is tidy"
      - cd src
      - echo "Restore, Build, then Publish.."
      - dotnet restore
      - dotnet build --no-restore
      - dotnet publish --no-restore --output ../outputDirectory   # publish to the repo root to match the artifacts base-directory

artifacts:
  base-directory: outputDirectory
  files:
    - '**/*'
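On the pipeline side, the project is wired in as a Build action. If the pipeline itself is defined in CloudFormation, the stage could look something like this sketch (action and artifact names are placeholders):

- Name: Build
  Actions:
    - Name: BuildDotNetPackage
      ActionTypeId:
        Category: Build
        Owner: AWS
        Provider: CodeBuild
        Version: '1'
      Configuration:
        ProjectName: my-dotnet-build
      InputArtifacts:
        - Name: SourceOutput
      OutputArtifacts:
        - Name: BuildOutput

CodePipeline hands whatever the buildspec declares under artifacts to the next stage as BuildOutput.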

3. Builds, Docker

For this example, we will assume a different scenario: we want to build a Docker image and push it to Amazon Elastic Container Registry (ECR). You can then use these images to deploy containers to Amazon Elastic Container Service, either manually or through CodePipeline. Assuming we already have a Dockerfile in our repository, our buildspec.yml should look like this:

version: 0.2

env:
  variables:
    ECR_REPO: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapi

phases:
  install:
    runtime-versions:
      docker: 18
  build:
    commands:
      - echo "Building a Docker image.."
      - docker build -t my-image . --file Dockerfile   # image names must be lowercase
      - echo "Tagging Docker image for ECR.."
      - docker tag my-image ${ECR_REPO}:${CODEBUILD_SOURCE_VERSION}
      - echo "Logging into ECR.."
      - $(aws ecr get-login --no-include-email)
      - echo "Pushing Docker image to ECR.."
      - docker push ${ECR_REPO}:${CODEBUILD_SOURCE_VERSION}
      - echo "Done."

4. Unit testing.

This one is simple.
Let’s assume we are building a Node.js package using npm, but before building and publishing the artifacts, we want to run unit tests.
Here’s what our buildspec.yml should look like:

version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 12
  build:
    commands:
      - echo "Installing dependencies.."
      - npm install
      - echo "Unit testing.."
      - npm test
      - echo "Building.."
      - npm run build   # "npm build" does not run the package's build script; "npm run build" does

artifacts:
  base-directory: build
  files:
    - '**/*'
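Worth noting: if your test runner can emit JUnit-style XML, CodeBuild can also surface the results natively in its Reports tab. A sketch of the extra buildspec section, assuming the runner writes reports/junit.xml:

reports:
  unit-tests:
    files:
      - junit.xml
    base-directory: reports
    file-format: JUNITXML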

5. Automation testing.

For automation testing, our teams prefer to maintain the automation test scripts in separate repositories from their application repositories. So, just like the ad-hoc builds, we trigger our automation test jobs on demand after successfully deploying to QA and/or UAT, and the generated reports are then sent to the team.
For simplicity, let’s say our test scripts are written in Node.js, run the automation tests, and write reports to a local reports folder. After that, we want to upload these reports to S3 and notify the team through SNS. Of course, we do not want to hard-code any secrets or endpoints in our scripts, so we store them as JSON in AWS Secrets Manager and retrieve them into environment variables, which our scripts can read, before starting the tests.
This automated job saved one of our engineering teams 15 minutes per run compared to when they ran it locally on their machines. If we run the tests 4 times a week, that’s an hour saved every week just by running them on CodeBuild!
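The secret itself is a one-off setup; the keys below are hypothetical examples of what test scripts might need. Note that the buildspec’s secrets-manager mapping can also pull an individual JSON key into its own variable using the secret-id:json-key syntax.

aws secretsmanager create-secret --name automation-testing-secret --secret-string '{"API_ENDPOINT":"https://qa.example.com","API_KEY":"changeme"}'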

version: 0.2

env:
  variables:
    S3_BUCKET: my-reports-bucket
    TOPIC_ARN: arn:aws:sns:us-east-1:123456789012:mySNSTopic
  secrets-manager:
    SECRET: automation-testing-secret

phases:
  install:
    runtime-versions:
      nodejs: 12
  pre_build:
    commands:
      - echo "Get current timestamp for reports naming.."
      - TIME=$(date +"%Y%m%d_%H%M%S")
  build:
    commands:
      - echo "Installing dependencies.."
      - npm install
      - echo "Testing.."
      - npm test
  post_build:
    commands:
      - echo "Zipping reports.."
      - cd reports
      - zip -r ${TIME}.zip .
      - echo "Uploading zipped reports to S3.."
      - aws s3 cp ${TIME}.zip s3://${S3_BUCKET}/${TIME}.zip
      - echo "Notifying of the new reports.."
      - 'aws sns publish --topic-arn ${TOPIC_ARN} --message "You can download the new automation test reports from here: https://${S3_BUCKET}.s3.us-east-1.amazonaws.com/${TIME}.zip"'
      - echo "Done."

6. Database migrations.

During development of new applications, our software engineers often need to migrate or update database schemas and tables. This job is often forgotten in CI/CD pipelines, but DevOps principles and best practices should be applied to all pieces of the puzzle, and this is one of the critical ones!
So, let’s assume we have a .NET Core build that runs as part of a CodePipeline, as in example 2 above. However, after the build, we also want CodeBuild to generate the SQL migrations script, connect to the database, and run the script. To do this, we’ll need two useful tools:
- jq: a JSON parser (to parse the database connection details from JSON)
- MySQL client (to connect to our MySQL database, e.g. Aurora).

version: 0.2

env:
  secrets-manager:
    SECRET: db-connection-secret

phases:
  install:
    runtime-versions:
      dotnet: 3.1
    commands:
      - apt-get update
      - apt-get install -y jq mysql-client
  pre_build:
    commands:
      - echo "Getting Database details.."
      - DBSERVER=$(echo "$SECRET" | jq -r '.DBendpoint')
      - DBUser=$(echo "$SECRET" | jq -r '.DBuser')
      - DBPassword=$(echo "$SECRET" | jq -r '.DBpassword')
  build:
    commands:
      - echo "CD into source folder.."
      - cd src
      - echo "Restore, Build, then Publish.."
      - dotnet restore
      - dotnet build --no-restore
      - dotnet publish --no-restore --output ../outputDirectory   # publish to the repo root to match the artifacts base-directory
      - echo "Generate migrations script.."
      - dotnet ef migrations script -o migration.sql --idempotent
  post_build:
    commands:
      - echo "Connecting to ${DBSERVER} & running MySQL query.."
      - mysql -h ${DBSERVER} -P 3306 -u ${DBUser} -p${DBPassword} mydatabase1 < migration.sql
      - echo "Done."

artifacts:
  base-directory: outputDirectory
  files:
    - '**/*'
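One caveat: dotnet ef is not bundled with every build image. If yours lacks it, the EF Core CLI tool can be installed in the install phase along these lines (version pinned to match the runtime; adjust as needed):

# extra install-phase commands, if the image lacks dotnet-ef
- dotnet tool install --global dotnet-ef --version 3.1.0
- export PATH="$PATH:$HOME/.dotnet/tools"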

7. Database operations.

Same as in the previous example, we want to perform database operations somewhere in our CI/CD pipeline. Let’s say we have a single development database instance that our big engineering team shares. We don’t want everyone to log in to the database to create new schemas (databases) and then forget to delete them when their work is done. We want to automate this process: a new schema should be created for each feature branch and dropped when the branch is merged.

From the team’s pipeline (Bitbucket in our case, hence the BITBUCKET_BRANCH variable below), we trigger the CodeBuild project with the required variables overridden:

aws codebuild start-build --project-name myProject --environment-variables-override "[{\"name\":\"ACTION\",\"value\":\"create\"},{\"name\":\"BRANCH\",\"value\":\"${BITBUCKET_BRANCH}\"}]"

The buildspec then performs the requested action:
version: 0.2

env:
  secrets-manager:
    SECRET: db-connection-secret

phases:
  install:
    runtime-versions:
      dotnet: 3.1
    commands:
      - apt-get update
      - apt-get install -y jq mysql-client
  pre_build:   # this must run before the build phase uses these variables
    commands:
      - echo "Getting Database details.."
      - DBSERVER=$(echo "$SECRET" | jq -r '.DBendpoint')
      - DBUser=$(echo "$SECRET" | jq -r '.DBuser')
      - DBPassword=$(echo "$SECRET" | jq -r '.DBpassword')
      - echo "Define the action required.."
      - |
        if [ "${ACTION}" = "create" ]; then
          SQL_QUERY="CREATE DATABASE IF NOT EXISTS \`${BRANCH}\`;";
        elif [ "${ACTION}" = "drop" ]; then
          SQL_QUERY="DROP DATABASE \`${BRANCH}\`;";
        else
          echo "Action ${ACTION} is not allowed!" && exit 1;
        fi
  build:
    commands:
      - echo "Connecting to ${DBSERVER} & running MySQL Action.."
      - mysql -h ${DBSERVER} -P 3306 -u ${DBUser} -p${DBPassword} -e "${SQL_QUERY}"
      - echo "Done."

8. CloudFormation Packaging (e.g. Nested Stacks)

We often, if not always, use the CodePipeline integration with CloudFormation to deploy or update our stacks. This can be tricky if your CloudFormation stacks are organised as nested stacks. To deploy manually, you would usually use the AWS CLI to package the templates into a single template which you can then deploy. Fear not, this is easy to replicate in a CI/CD pipeline too.

version: 0.2

env:
  variables:
    S3_BUCKET: my-cloudformation-bucket   # bucket names must be lowercase
    REGION: us-east-1

phases:
  build:
    commands:
      - echo "Packaging CloudFormation nested stacks.."
      - aws cloudformation package --region $REGION --template-file mainTemplate.yaml --s3-bucket $S3_BUCKET --output-template-file packaged.yml
      - echo "Done."

artifacts:
  files:
    - packaged.yml
    - parameters/*.json
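The packaged.yml artifact is then handed to the CloudFormation deploy action in CodePipeline. For completeness, deploying it manually is a sketch like this (stack name assumed):

aws cloudformation deploy --template-file packaged.yml --stack-name my-stack --capabilities CAPABILITY_IAM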

9. Source Code repository tagging

In some workflows, we’re required to tag the commit in source control that we have just deployed successfully to an environment. For example, after successfully deploying to production, we want to tag the deployed commit with a prod tag.
To do this, CodeBuild will need a Git SSH key, which we store in AWS Secrets Manager and pass in through the environment variables.

version: 0.2

env:
  variables:
    REPO: myReactProject
  secrets-manager:
    GITSSH: git-ssh-key

phases:
  pre_build:
    commands:
      - echo "Parsing the SSH key.."
      - mkdir -p ~/.ssh
      - echo "$GITSSH" > ~/.ssh/id_rsa   # quotes preserve the key's line breaks
      - chmod 400 ~/.ssh/id_rsa
      - ssh-keyscan github.com >> ~/.ssh/known_hosts   # trust GitHub's host key
      - echo "Connecting to GitHub and cloning.."
      - ssh -T git@github.com || true   # GitHub returns a non-zero exit code here even on success
      - git clone git@github.com:myWorkspace/${REPO}.git
      - cd ${REPO}
  build:
    commands:
      - echo "Deleting old tag if exists.."
      - git tag --delete prod || true
      - git push origin --delete prod || true
      - echo "Tagging and pushing the tag.."
      - git tag -a -f prod $CODEBUILD_SOURCE_VERSION -m "Deployed to production"
      - git push origin prod
      - echo "Done."

10. Deployment marking (e.g. NewRelic)

Last but not least, we mark our deployments in our monitoring tools. New Relic, for example, exposes a REST endpoint for recording a deployment against an application, so performance changes can be correlated with releases:

version: 0.2

env:
  secrets-manager:
    APP_ID: newrelic-app-id
    API_KEY: newrelic-api-key

phases:
  build:
    commands:
      - echo "Sending deployment mark to NewRelic.."
      - curl -X POST "https://api.newrelic.com/v2/applications/${APP_ID}/deployments.json" -H "X-Api-Key:${API_KEY}" -i -H "Content-Type:application/json" -d "{\"deployment\":{\"revision\":\"${CODEBUILD_SOURCE_VERSION}\",\"description\":\"New deployment.\"}}"
      - echo "Done."

Let’s recap.

In this post, we have listed the top 10 ways we utilise AWS CodeBuild at Tigerspike:
1. Builds, ad-hoc.
2. Builds, part of a CodePipeline.
3. Builds, Docker.
4. Unit testing.
5. Automation testing.
6. Database migrations.
7. Database operations.
8. CloudFormation Packaging (e.g. Nested Stacks)
9. Source Code repository tagging
10. Deployment marking (e.g. NewRelic)
