I’ve been using GCP’s Cloud Run for a handful of projects recently,
including staffeng.com, and have generally been really pleased with it.
Now that I’m familiar with it, I can get all of this working for a new Python project quite quickly:

- Cloud Build triggers build the Dockerfile in the repository and upload the image to Google’s container registry
- Each successful build automatically deploys from the container registry to my Cloud Run endpoint
- That Cloud Run endpoint maps to a custom domain, where Google manages the SSL/TLS certificate on my behalf
Once that’s set up, the development workflow is at least as good as anything
I’ve used in a professional setting, in large part because the codebase is
so small, the architecture has so few components (just a single HTTP service),
and I usually don’t care much about managing existing state.
However, there are two things that I find somewhat lacking in the toolkit right now:

- There isn’t a good example of running Python tests as part of this pipeline, even after a good amount of googling around. As long as the build completes, it’s going to get deployed.
- The deployment strategy is not robust: it simply replaces the existing container with the new one. If the new container fails healthchecks… who knows… because the rollout doesn’t check healthchecks at all.
For the project I’m spinning up today, I wanted to try to get something a bit better than
those. First, I poked around testing and honestly couldn’t find a particularly good strategy,
and ended up adding a test step into my Dockerfile itself:
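Something along these lines, where the exact file layout and the gunicorn entrypoint are assumptions on my part:

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Test dependencies (pytest included) have to live in requirements.txt
# for this approach to work, which is what bloats the production image.
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

# Run the test suite at build time; a non-zero exit here fails the whole build.
RUN python -m pytest

# Assumed entrypoint for a typical Cloud Run Python service.
CMD ["gunicorn", "--bind", ":8080", "app:app"]
```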
This is not particularly elegant, and it means I had to add all my test dependencies,
including pytest, into my requirements.txt file, bloating the production container image
a bit, but in the end it does run the tests during the build step and will properly fail the
build if they don’t pass.
To do this right without contaminating the production container, I think
you’d need to do something more sophisticated than is possible with just cloudbuild.yaml,
and have a genuine multi-step build pipeline with hooks in between.
Or maybe it is possible to do what I’m trying to do with cloudbuild.yaml and the
documentation is just a bit opaque; certainly I haven’t dug too deeply in there.
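For what it’s worth, cloudbuild.yaml does support multiple sequential steps, so a sketch of running tests outside the production image might look like this (the image tag, app name, and requirements-dev.txt file are assumptions on my part):

```yaml
steps:
  # Run the test suite in a throwaway Python container that never ships.
  - name: 'python:3.11-slim'
    entrypoint: 'bash'
    args: ['-c', 'pip install -r requirements-dev.txt && python -m pytest']
  # Only reached if the previous step succeeded: build the production image.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA', '.']
images:
  - 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA'
```

Steps run in order and a failing step aborts the build, so the test dependencies stay out of the image that actually deploys.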
That’s roughly my sense of the state of deployment strategies as well:
if you look through the Cloud Run rollout documentation,
you have all the tools you need to build a deployment strategy that makes sense:
```shell
# deploy a new revision without routing any traffic to it
gcloud run deploy hello-srv --image hello-00005-blue --no-traffic
# split 5% of traffic to the latest revision
gcloud run services update-traffic hello-srv --to-revisions LATEST=5
# split 75%/25% traffic between two revisions
gcloud run services update-traffic hello-srv --to-revisions hello2-00005-blue=75,hello2-00001-green=25
# all traffic to latest revision
gcloud run services update-traffic hello-srv --to-latest
```
Don’t get me wrong, these are fantastic primitives,
and I believe you could hook into Cloud Monitoring
to pull success rates and make control-plane decisions about whether to roll forward,
but you really don’t get much out of the box. This contrasts interestingly with Kubernetes,
which is certainly a bit of a nightmare operationally, but does give you automatic
validation of new pods and automated rollback (as long as you write useful healthchecks):
my favorite early moment of using Kubernetes was rolling out a bad Nginx config and realizing
that the pod had automatically failed to deploy!
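That Kubernetes behavior comes from readiness probes on the Deployment’s pod template; a minimal sketch, where the image and health endpoint are assumptions on my part:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
        - name: nginx
          image: nginx:1.25          # assumed image
          ports:
            - containerPort: 80
          readinessProbe:            # new pods must pass this before taking traffic
            httpGet: {path: /, port: 80}
            periodSeconds: 5
```

During a rolling update, pods that never become Ready stall the rollout rather than replacing the healthy ones, and `kubectl rollout undo deployment/web` reverts to the previous revision.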
Altogether though, Cloud Run is probably my favorite project deployment tool at this point,
and GCP’s suite of tools is quite good for getting a rather good amateur setup going very, very quickly.