Michael Alyn Miller

The Opposite of Internet-Scale Deployments


I am designing a new software-as-a-service product and am almost overwhelmed by the number of deployment options available today. Many of these are SaaS products in their own right, and it can be hard to separate “new” from “necessary.”

I bring my own biases to this as well: I have spent the last decade working on a very large service and have to be aware of my own tendency to lean towards familiar patterns that may not be appropriate for this new venture.

For example, you can get away with a single machine for a very long time, especially if you are using a fast, local database like SQLite and have durable, locally-attached storage. Heroku, my go-to PaaS vendor for a number of years, does not offer durable local storage.

Infrastructure providers like DigitalOcean have a range of affordable Linux VMs, all with fast, local storage, but deployments and operating system upgrades are now your job. I normally use NixOS for repeatable system configuration, on everything from laptops to desktops to cloud servers to routers, but it’s not quite a fire-and-forget process.

Ultimately there were three things that I needed:

  1. Minimal operating system footprint, ideally with some form of auto-upgrades
  2. Container-focused build pipeline
  3. Automatic, Git-based deployments

My goal was to create a system that I could set up and (mostly) ignore, freeing me up to focus on the service itself.

Operating system selection

NixOS has an auto-update mechanism that works quite well, but major upgrades are a manual process that requires a careful review of the release notes. I wanted to see if I could find something more automated for this service. Unlike my NixOS machines, this VM only had to run one or two Docker containers and would require virtually no OS-level configuration.

My search led to Flatcar Container Linux, which seemed perfect for what I wanted (from their web page):

A minimal OS image only includes the tools needed to run containers. No package manager, no configuration drift.

Delivering the OS on an immutable filesystem eliminates a whole category of security vulnerabilities.

Automated atomic updates mean you get the latest security updates and open source technologies.

Flatcar has native support for a number of cloud providers and can be installed on a bare metal machine if your provider is not supported (this is great for Hetzner Dedicated Root Servers, for example).
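If you do end up on bare metal, the install itself is a single script run from the live boot environment. As a rough sketch (the target disk and the Ignition file name here are assumptions, not values from my setup):

sudo flatcar-install -d /dev/sda -C stable -i ignition.json

That writes the stable channel image to /dev/sda and bakes in whatever provisioning you declared in ignition.json.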

Finally, Flatcar let me “cheat” with the initial deployment: I manually installed Flatcar on the VM and then created a systemd unit to launch our Docker container. Long term it would be good to use a declarative provisioning process, but the initial goal was to get the OS installed and move on to the service itself.
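As a sketch, that unit looked something like the following; the service name, image path, and port mapping are placeholders rather than my real values:

# /etc/systemd/system/my-service.service
[Unit]
Description=My service container
After=docker.service
Requires=docker.service

[Service]
# Remove any leftover container (the leading "-" tells systemd to
# ignore a failure here), pull the current `prod` image, then run
# it in the foreground so systemd can supervise it.
ExecStartPre=-/usr/bin/docker rm -f my-service
ExecStartPre=/usr/bin/docker pull ghcr.io/OWNER/your_service_name:prod
ExecStart=/usr/bin/docker run --rm --name my-service -p 443:8443 ghcr.io/OWNER/your_service_name:prod
ExecStop=/usr/bin/docker stop my-service
Restart=always

[Install]
WantedBy=multi-user.target

A systemctl enable --now my-service then starts the container and brings it back after reboots (and OS updates).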

Note that manually installing Flatcar still gets you the automatic updates! In fact, as I was writing this article, I saw that the OS had been updated yesterday without any intervention on my part.
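You can also check on the updater yourself. Flatcar ships a client for its update engine, and running the following on the machine reports the updater's current state (idle, downloading an update, waiting for a reboot, and so on):

update_engine_client -status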

Continuous integration with GitHub Actions

The build portion of this process is relatively straightforward, and the ultimate goal is a simple one: publish a Docker image somewhere that our VM can access. Docker Hub and GitHub are the obvious choices. I already use GitHub Actions for building other things, so it made sense for me to stick with GitHub and their GitHub Packages service.

GitHub has a sample GitHub Actions workflow for building and publishing a Docker image in their docs. I have reproduced the sample here (with a few edits, mostly around branch names and the mapping of branches to Docker tags):

name: Build & Publish

on:
  push:
    # Publish `main` as the Docker `latest` tag and `release`
    # as the `prod` tag (the `release` mapping is probably
    # only useful for deployable Docker images).
    branches:
      - main
      - release

    # Publish `v1.2.3` tags as releases (not as useful for
    # deployed services, but left here so that this sample
    # works for all types of Docker images).
    tags:
      - v*

env:
  # TODO: REPLACE THIS WITH YOUR SERVICE OR PACKAGE NAME
  IMAGE_NAME: your_service_name

jobs:
  publish:
    runs-on: ubuntu-latest
    if: ${{ github.event_name == 'push' }}

    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout sources
        uses: actions/checkout@v3

      - name: Build image
        run: docker build . --file Dockerfile --tag $IMAGE_NAME

      - name: Log in to registry
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin

      - name: Push image
        run: |
          # Set $IMAGE_ID to the full path to the image
          IMAGE_ID=ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME

          # Change all uppercase to lowercase
          IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')

          # Strip git ref prefix from version
          VERSION=$(echo "${{ github.ref }}" | sed -e 's,.*/\(.*\),\1,')

          # Strip "v" prefix from tag name
          [[ "${{ github.ref }}" == "refs/tags/"* ]] && VERSION=$(echo $VERSION | sed -e 's/^v//')

          # Convert `main` (branch) to `latest` (tag), and
          # `release` (branch) to `prod` tag.
          [ "$VERSION" == "main" ] && VERSION=latest
          [ "$VERSION" == "release" ] && VERSION=prod

          # Tag and push the image.
          echo IMAGE_ID=$IMAGE_ID
          echo VERSION=$VERSION
          docker tag $IMAGE_NAME $IMAGE_ID:$VERSION
          docker push $IMAGE_ID:$VERSION

Note that this GitHub workflow is all about building and publishing Docker images; it has nothing to do with deployment! In fact, you can use this workflow as-is to publish “normal” Docker images to GitHub Packages.

To put that another way, I like the fact that the GitHub side of things is focused on “continuous integration” without any knowledge of what might happen after the container has been published.

Automatic deployments with Watchtower

Watchtower provides the “continuous deployment” part of this workflow: it scans all of the running containers on your machine, figures out which images those containers use, then polls your image registry for new versions of those images.

Watchtower knows about tags, so if you deploy from the release branch (using the prod Docker tag) then Watchtower will only redeploy the service when the prod tag has been updated; the latest tag will be ignored (but could still be used in a pre-production environment using Watchtower).
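Put concretely, Watchtower follows whatever tag each container was started from, so the split between environments is just a matter of which tag you run. A hypothetical pair of machines (image and container names are placeholders):

# Production box: redeploys when the `release` branch updates `prod`.
docker run -d --name my-service ghcr.io/OWNER/your_service_name:prod

# Pre-production box: redeploys when `main` updates `latest`.
docker run -d --name my-service ghcr.io/OWNER/your_service_name:latest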

The best way to run Watchtower is as a container that has access to your Docker socket (which also allows Watchtower to update itself). This can be done manually using a docker run command:

docker run \
    --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
    --mount type=bind,source=/home/core/.docker/config.json,target=/config.json \
    containrrr/watchtower \
        --interval 60

A few notes on those command line arguments:

  1. The first --mount gives Watchtower access to the Docker socket, which is how it lists the running containers, pulls new images, and restarts containers (including its own).
  2. The second --mount hands Watchtower the registry credentials written by docker login (on Flatcar, the core user’s config.json), which it needs in order to poll a private registry like GitHub Packages.
  3. --interval 60 tells Watchtower to check for new images every sixty seconds.

There are a number of other command line options worth reviewing, including a whole section on notifications which you can use to get a Slack message when your deployment has completed.
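For example, Slack notifications are configured through environment variables. Something like the following should work, although the webhook URL is a placeholder and the variable names may shift between Watchtower versions:

docker run \
    --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
    --mount type=bind,source=/home/core/.docker/config.json,target=/config.json \
    -e WATCHTOWER_NOTIFICATIONS=slack \
    -e WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL="https://hooks.slack.com/services/..." \
    containrrr/watchtower \
        --interval 60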

DIY without a lot of D.I.Y.

I am really happy with this system, especially since the Docker container that houses the service is completely portable between environments; I can run the same container on my dev machine without any changes.

This makes everything easier, and means that all of the testing I do when I develop locally is very likely to “just work” when it gets to production. Similarly, we can easily create pre-production environments using the same tools, and even wire them up with Watchtower for automatic deployments.