To get CI/CD working, there are a few moving parts to take care of:

* Set up a CI/CD user with appropriate permissions in your AWS account, and take a note of the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
* Create Github secrets for the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables
* Configure the Github Actions execution environment to use those secrets
* Configure the Github Actions execution environment with Go and CDK installed, so that it can build and deploy the project
* Run build/test commands (e.g. `go test ./...` to run tests)
* Configure Github Actions to execute the deployment using `cdk deploy`
There's an extra complication if you're using private Go modules in your project.
For example, if you've got a private repo in your Github organisation (org) that contains the `github.com/org/library` package, and you want to use it from the `github.com/org/api` repo, you have to give Github Actions permission to access that other repository.
At the time of writing, the only way to do this is via a Personal Access Token. Personal Access Tokens are "personal" to a specific Github user, and they grant access to repositories in every organisation that the user belongs to, which can be a security problem. It also means that if I create one for a client and later leave the organisation, the token is revoked along with my membership, breaking the build.
To avoid this I recommend that you:
* Create a Github user to use for CI tasks within your organisation
* Create a personal access token for that CI user
* Add the personal access token as a secret called `CI_USER_PERSONAL_ACCESS_TOKEN` in Github
Github Actions can run on various virtual machine configurations, but you can also run your commands inside a Docker container on that machine.
I think this is the best way, because you can ensure that the Docker container contains everything you need to build the software, and you have full control over the versions of dependencies you use.
It's relatively easy to make a Docker image and push it to Github's built-in Docker registry. To reduce problems around authentication against the registry, I try to keep build containers public and open source.
To build the container that all the CI/CD actions will run in:

* Create a (public) repo for the container
* Add a Dockerfile
* Add a Github Actions workflow to build the image and push it to the Github Docker registry
With Go CDK, we need both Go and Node.js. Since Node is more time-consuming to set up, I've based the Docker image on `node:latest`, installed the AWS CDK, then installed Go and updated the environment to include the `go` executables on the path.
```
FROM node:latest

# Install CDK.
RUN npm install -g aws-cdk

# Install Go.
RUN curl -L -o go1.16.6.linux-amd64.tar.gz https://golang.org/dl/go1.16.6.linux-amd64.tar.gz
RUN rm -rf /usr/local/go && tar -C /usr/local -xzf go1.16.6.linux-amd64.tar.gz
ENV PATH "$PATH:/usr/local/go/bin"
```
I set up `.github/workflows/deploy.yaml` to build the Dockerfile and push it to the registry whenever the `main` branch is updated. Even though there's a reference to `secrets.GITHUB_TOKEN`, you don't actually have to create that secret - it's built-in [0].
```
name: Create and publish Docker image
on:
  push:
    branches: ['main']
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Log in to the Container registry
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
```
One thing to note here is that the tag applied to the Docker image is `main`, taken from the branch name, rather than `latest`.
The Dockerfile above is at https://github.com/a-h/aws-go-cdk-action [1]. You can see information about the Docker image at [2].

The image can be pulled from `ghcr.io/a-h/aws-go-cdk-action:main`:

```
docker pull ghcr.io/a-h/aws-go-cdk-action:main
```
Once your build Docker image is built and pushed, you can set up your CDK project's CI/CD pipeline.
From your CDK project, you'll need to set up a `.github/workflows/deploy.yaml` file to get Github Actions to run the workflow.
Make sure you've added the appropriate secrets:
* `CI_USER_PERSONAL_ACCESS_TOKEN` - the personal access token of the Github CI user you configured. (Only required if you need to access private Go modules stored in Github.)
* `AWS_ACCESS_KEY_ID` - the AWS access key for the AWS CI user.
* `AWS_SECRET_ACCESS_KEY` - the AWS secret access key for the AWS CI user.
I'll take the file section by section.
First, the name of the workflow, and when to trigger it. In this case, every push to `main` runs the workflow.
```
name: Deploy
on:
  push:
    branches:
      - main
```
Configure a single job called `Test and deploy`. It runs on a `ubuntu-latest` VM, but inside my `aws-go-cdk-action` container.
```
jobs:
  deploy:
    runs-on: ubuntu-latest
    container: ghcr.io/a-h/aws-go-cdk-action:main
    name: Test and deploy
```
I use a lot of DynamoDB in my projects, so they usually contain integration tests that use DynamoDB local for testing. This section runs a DynamoDB local container with the DNS name `dynamodb`, and opens up port 8000 in the container.
```
    services:
      dynamodb:
        image: amazon/dynamodb-local
        ports:
          - 8000:8000
```
The job has a number of steps to execute. The first step is to get the code.
```
    steps:
      - name: Checkout
        uses: actions/checkout@v2
```
Next, cache Go modules.
```
      - uses: actions/cache@v2
        with:
          path: |
            ~/.cache/go-build
            ~/go/pkg/mod
          key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
          restore-keys: |
            ${{ runner.os }}-go-
```
Pull the modules by running `make get` (I usually put all my CI commands into a Makefile in the root of the project).
The `TOKEN` environment variable and the `git config` command are only required if you need to access private Go modules in Github repositories. The command configures `git` to present the `CI_USER_PERSONAL_ACCESS_TOKEN` credentials when accessing Github, enabling the pipeline to fetch private Go modules.
Inside the `make get` command, `go env -w GOPRIVATE=github.com/your-organisation-name` is also run, so that the public Go module proxy is bypassed for those modules.
There's some useful information on Go and Github Actions over at [3].
```
      - name: Get modules
        env:
          TOKEN: ${{ secrets.CI_USER_PERSONAL_ACCESS_TOKEN }}
        run: |
          git config --global url."https://my-ci-user:${TOKEN}@github.com".insteadOf "https://github.com"
          make get
```
The tests access the `dynamodb` container, customising the DynamoDB endpoint to point at the local instance when the `DYNAMODB_ENDPOINT` environment variable is set.
```
      - name: Test
        env:
          DYNAMODB_ENDPOINT: http://dynamodb:8000
        run: make test
```
If the tests pass, it's time to start the CDK deployment, so the AWS credentials need to be pulled from Github secrets and added to the environment.
```
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-1
```
Finally, the deployment can be executed.
`make deploy` just runs `cdk deploy` in the appropriate directory. CDK is installed in the build container, so there's no special setup required at this point.
```
      - name: CDK deploy
        run: make deploy
```
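The Makefile backing the `make get`, `make test` and `make deploy` commands isn't shown in the post; a minimal sketch might look like this (the organisation name and the `cdk` directory are placeholders):

```
get:
	go env -w GOPRIVATE=github.com/your-organisation-name
	go mod download

test:
	go test ./...

deploy:
	# --require-approval never skips the interactive security prompt,
	# which would otherwise hang a non-interactive CI run.
	cd cdk && cdk deploy --require-approval never
```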
For ease of copy/paste, here's the full config:
```
name: Deploy
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-latest
    container: ghcr.io/a-h/aws-go-cdk-action:main
    name: Test and deploy
    services:
      dynamodb:
        image: amazon/dynamodb-local
        ports:
          - 8000:8000
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - uses: actions/cache@v2
        with:
          path: |
            ~/.cache/go-build
            ~/go/pkg/mod
          key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
          restore-keys: |
            ${{ runner.os }}-go-
      - name: Get modules
        env:
          TOKEN: ${{ secrets.CI_USER_PERSONAL_ACCESS_TOKEN }}
        run: |
          git config --global url."https://my-ci-user:${TOKEN}@github.com".insteadOf "https://github.com"
          make get
      - name: Test
        env:
          DYNAMODB_ENDPOINT: http://dynamodb:8000
        run: make test
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-1
      - name: CDK deploy
        run: make deploy
```
There are a few things to get right, but running your build inside your own Docker image simplifies everything a lot.
It's a bit of a pain to get private Go modules working, because Github doesn't provide a way to grant Github Actions access to specific organisation repositories. Creating a dedicated CI user in your Github organisation keeps that access under your control, and means your builds don't break when someone leaves the organisation.
In a future post, I'll share some tips on CI/CD user permissions, and how to prevent a common privilege escalation attack vector that I see a lot in AWS CI/CD pipelines.