This post is about how to cheaply spin up a documentation server that sits behind an authorization proxy. This will be analogous to GitHub Pages for a private repository. However, this method requires neither a subscription to GitHub Pro nor a domain name. (A domain name will be assigned to us.)

Specifically, we’ll use MkDocs to generate the static site, oauth2-proxy to prevent unauthorized access, and Google Cloud Run to host it for practically nothing. We’ll also use Terraform to create the cloud resources.
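Concretely, here is roughly the set of files we’ll have created by the end of the post:

.
├── mkdocs.yml
├── docs/
│   ├── index.md
│   └── acl.txt
├── Dockerfile
├── docker.tf
├── secrets.tf
└── cloudrun.tf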

Creating the static site

MkDocs is a documentation site generator and a lightweight alternative to Sphinx. To begin, create a mkdocs.yml file in the root of the repository.

# mkdocs.yml
site_name: foo
repo_url: https://github.com/jeremyadamsfisher/foo
site_description: An example project that needs private documentation
site_author: Jeremy Fisher
nav:
  - Home: index.md
theme:
  name: material # 👈 this is optional, but recommended

We can also create an example page, which is what visitors will see when they first load the site.

<!--  docs/index.md -->

# Foo

This is a documentation server that is private to our team. We'll make sure that only individuals on the whitelist can access it!

These two files are all you need to build a site with MkDocs. We can preview it locally with:

pip install mkdocs mkdocs-material \
&& mkdocs serve

Implementing the authorization proxy

Let’s continue building out the local implementation by adding an authorization layer.

First, we need an access list so the proxy knows who to block and who to let through. This can just be a newline-delimited list of emails. Let’s create it:

( cat << EOF
user1@gmail.com
user2@adamsfisher.me
EOF
) > docs/acl.txt

Then, we need to build the site. Run:

mkdocs build

This produces a static site in the site directory. Now, oauth2-proxy is a Go binary, so there are several easy ways to run it locally. I will show how to use Docker.

Follow the oauth2-proxy documentation on configuring a Google OAuth application to get a client ID and secret.

docker run --rm -ti -p 4180:4180 \
    -v $(pwd)/site:/site \
    -v $(pwd)/docs/acl.txt:/acl.txt \
    bitnami/oauth2-proxy \
    --upstream=file:///site/#/ \
    --http-address=0.0.0.0:4180 \
    --authenticated-emails-file=/acl.txt \
    --cookie-secret=$(python -c 'import os,base64; print(base64.urlsafe_b64encode(os.urandom(32)).decode())') \
    --client-id=<GET FROM INSTRUCTIONS> \
    --client-secret=<GET FROM INSTRUCTIONS>

Now, go to http://localhost:4180; we should have to log in before accessing the documentation site.
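As a quick sanity check, an unauthenticated request should be redirected to the sign-in flow rather than served the documentation. A sketch (the exact status code may vary):

# expect a redirect to the oauth2 sign-in page, not the documentation HTML
curl -sI http://localhost:4180/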

Deploying to Google Cloud Run

Now that this is working, we need to translate it to Cloud Run. This will be a dance: the website and the authorization proxy should live in a Docker image, but the proxy’s configuration should not, to avoid leaking secrets. Instead, the secret configuration will come from Google Secret Manager and the Cloud Run configuration itself.
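Helpfully, oauth2-proxy can read any of its flags from OAUTH2_PROXY_-prefixed environment variables, which is what makes this split possible. The three secret values map to environment variables like so (placeholder values; Cloud Run will supply the real ones later):

export OAUTH2_PROXY_CLIENT_ID=<client id>         # equivalent to --client-id
export OAUTH2_PROXY_CLIENT_SECRET=<client secret> # equivalent to --client-secret
export OAUTH2_PROXY_COOKIE_SECRET=<random secret> # equivalent to --cookie-secret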

Writing the Docker image

We’ll start by writing the Dockerfile. We can use a two-stage build: one stage to build the site, another to serve it.

# Build documentation static site

FROM python:3.9-slim-buster AS builder
WORKDIR /code
RUN pip install --upgrade pip
RUN pip install mkdocs mkdocs-material
COPY mkdocs.yml .
COPY docs docs
RUN mkdocs build


# Put static site behind auth proxy

FROM bitnami/oauth2-proxy
COPY --from=builder /code/site /site
COPY docs/acl.txt /acl.txt
CMD [ "--upstream=file:///site/#/", "--http-address=0.0.0.0:4180", "--authenticated-emails-file=/acl.txt" ]
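Before deploying, we can smoke-test the image locally. A minimal sketch, assuming a hypothetical tag foo-docs and that the three OAUTH2_PROXY_* variables above are exported in your shell:

# build the image, then pass the exported OAUTH2_PROXY_* values through
docker build -t foo-docs .
docker run --rm -p 4180:4180 \
    -e OAUTH2_PROXY_CLIENT_ID \
    -e OAUTH2_PROXY_CLIENT_SECRET \
    -e OAUTH2_PROXY_COOKIE_SECRET \
    foo-docs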

The Dockerfile itself should be straightforward for anyone with Docker experience. To push the image to the cloud, we need a private registry.

First, set up Terraform with Google Cloud.
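A minimal sketch of that setup, assuming you authenticate locally with application-default credentials:

# authenticate so Terraform can act on your behalf, and pick the project
gcloud auth application-default login
gcloud config set project your-gcp-project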

Then, we can create a registry:

# docker.tf
provider "google-beta" {
  project = "your-gcp-project"
  region  = "us-central1"
  zone    = "us-central1-c"
}

resource "google_artifact_registry_repository" "foo" {
  provider      = google-beta
  location      = "us-central1"
  repository_id = "foo"
  description   = "foo docker images"
  format        = "DOCKER"
}
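Creating the repository is then the standard Terraform workflow:

terraform init   # download the google-beta provider
terraform apply  # create the Artifact Registry repository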

Once the repository exists, the Docker image path should be something like us-central1-docker.pkg.dev/your-gcp-project/foo/docs. So, build and push the image with:

IMG=us-central1-docker.pkg.dev/your-gcp-project/foo/docs
docker build -t $IMG . \
&& docker push $IMG

It may be necessary to configure Docker credentials for the registry first:

gcloud auth configure-docker \
    us-central1-docker.pkg.dev

Keeping a secret

We’ll now populate Secret Manager with the client credentials.

First, declare the secrets and their data sources in Terraform:

# secrets.tf
resource "google_secret_manager_secret" "foo_client_secret" {
  secret_id = "foo_client_secret"
  replication {
    user_managed {
      replicas {
        location = "us-east1"
      }
    }
  }
}


data "google_secret_manager_secret_version" "foo_client_secret" {
  secret = google_secret_manager_secret.foo_client_secret.name
}


resource "google_secret_manager_secret" "foo_client_id" {
  secret_id = "foo_client_id"
  replication {
    user_managed {
      replicas {
        location = "us-east1"
      }
    }
  }
}


data "google_secret_manager_secret_version" "foo_client_id" {
  secret = google_secret_manager_secret.foo_client_id.name
}

Run Terraform. This should fail, because the secrets exist but have no versions: we haven’t added the actual values yet. Go to the Secret Manager console and manually paste in the client ID and secret from earlier. Then, run Terraform again and it should work.
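If you prefer the CLI to the console, the values can also be added with gcloud; a sketch, assuming the hypothetical shell variables CLIENT_ID and CLIENT_SECRET hold the values:

# add one version to each secret, reading the value from stdin
echo -n "$CLIENT_ID" | gcloud secrets versions add foo_client_id --data-file=-
echo -n "$CLIENT_SECRET" | gcloud secrets versions add foo_client_secret --data-file=-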

Pulling it together

Now that we have the configuration as managed secrets, we simply need to deploy the Cloud Run service.

# cloudrun.tf
resource "random_string" "foo_cookie_secret" {
  length           = 32
  override_special = "-_"
}

module "foo-docs" {
  source   = "garbetjie/cloud-run/google"
  version  = "~> 2"
  name     = "foo-docs"
  image    = "us-central1-docker.pkg.dev/your-gcp-project/foo/docs"
  location = "us-central1"
  env = [
    { key = "OAUTH2_PROXY_CLIENT_SECRET", value = nonsensitive(data.google_secret_manager_secret_version.foo_client_secret.secret_data) },
    { key = "OAUTH2_PROXY_CLIENT_ID", value = nonsensitive(data.google_secret_manager_secret_version.foo_client_id.secret_data) },
    { key = "OAUTH2_PROXY_COOKIE_SECRET", value = random_string.foo_cookie_secret.result },
  ]
  allow_public_access = true
  port                = 4180
}
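One note on the cookie secret: oauth2-proxy expects it to be 16, 24, or 32 bytes, which is why the random string above is 32 characters long. Deploying is one more apply:

terraform apply  # creates the cookie secret and the Cloud Run service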

Conclusion

There you have it. Go to the Cloud Run dashboard to find the URL that was assigned to the service. It should require a login before allowing access to the static site.
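Alternatively, the URL can be pulled from the command line, assuming the service name and region used above:

gcloud run services describe foo-docs \
    --region us-central1 \
    --format 'value(status.url)'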

Updating the site requires pushing a new Docker image and redeploying the Cloud Run service:

gcloud run deploy foo-docs \
    --image us-central1-docker.pkg.dev/your-gcp-project/foo/docs \
    --region us-central1