April 11, 2020.
I've been using GCP's Cloud Run for a handful of projects recently, including staffeng.com, and have generally been really pleased with it. Now that I'm familiar with it, I can get all of this working for a new Python project in about twenty minutes:
- a build triggers whenever the `master` branch is pushed to Github
- the build runs the `Dockerfile` in the repository and uploads the image to Google's container registry
- Cloud Run deploys the new image
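Roughly, that wiring lives in a `cloudbuild.yaml` along these lines (a sketch of the standard pattern rather than my exact config; `myapp` and the region are placeholders):

```yaml
# Build the image from the repository's Dockerfile.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp', '.']
# Push it to Google's container registry.
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/myapp']
# Deploy the new image to Cloud Run.
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'deploy', 'myapp', '--image', 'gcr.io/$PROJECT_ID/myapp',
         '--region', 'us-central1', '--platform', 'managed']
images: ['gcr.io/$PROJECT_ID/myapp']
```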
Once that's set up, the development workflow is at least as good as anything I've used in a professional setting, in large part because the codebase is so small, the architecture has so few components (just a single HTTP service), and I usually don't care much about managing existing state.
However, there are two things that I find somewhat lacking in the toolkit right now:

- running tests as part of the build, and failing the build when they fail
- deployment strategies beyond shifting all traffic to the newest revision, e.g. canaries with automated rollback
For the project I'm spinning up today, I wanted to try to get something a bit better on both fronts. First, I poked around testing and honestly couldn't find a particularly good strategy, ending up adding a test step into my `Dockerfile`:
```dockerfile
FROM python:3.7-slim

ENV APP_HOME /app
WORKDIR $APP_HOME

COPY ./src ./
COPY ./tests ./tests
COPY ./requirements.txt ./

RUN pip install -r ./requirements.txt
RUN PYTHONPATH=$PYTHONPATH:`pwd`/ pytest -v

CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 app:app
```
Specifically, the line running the tests is:

```dockerfile
RUN PYTHONPATH=$PYTHONPATH:`pwd`/ pytest -v
```
This is not particularly elegant, and it means I had to add all my test dependencies, `pytest` among them, into my `requirements.txt` file, bloating the production container image a bit. But in the end it does run the tests in the build step and properly fails the build if they don't pass.
To do this right without contaminating the production container, I think you'd need to do something more sophisticated than is possible with just a `Dockerfile`, and have a genuine multi-step build pipeline with hooks in between. Or maybe it is possible to do what I'm trying to do with `cloudbuild.yaml` and the documentation is just a bit opaque; certainly I've not dug too deeply in there.
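One approach I haven't actually tried that might thread the needle within a single `Dockerfile` is a multi-stage build: run the tests in a throwaway stage, and have the final stage install only the production dependencies. A sketch (the `requirements-test.txt` file is hypothetical; it would hold `pytest` and friends):

```dockerfile
# Test stage: install everything, run the tests here.
FROM python:3.7-slim AS test
WORKDIR /app
COPY ./src ./
COPY ./tests ./tests
COPY ./requirements.txt ./requirements-test.txt ./
RUN pip install -r ./requirements.txt -r ./requirements-test.txt
RUN PYTHONPATH=$PYTHONPATH:`pwd`/ pytest -v

# Production stage: no test dependencies end up in the final image.
FROM python:3.7-slim
WORKDIR /app
COPY ./src ./
COPY ./requirements.txt ./
RUN pip install -r ./requirements.txt
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 app:app
```

One caveat: depending on the builder, a stage nothing references may be skipped entirely (BuildKit prunes unused stages), so you might need to build with `--target test` first, or `COPY --from=test` something trivial, to guarantee the tests actually gate the build.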
That's roughly my sense of the state of deployment strategies as well. If you look through the Cloud Run rollout documentation, you have all the tools you need to build a deployment strategy that makes sense:
```bash
# split 5% of traffic to latest revision
gcloud run deploy --image hello-00005-blue --no-traffic
gcloud run services update-traffic hello-srv --to-revisions LATEST=5

# split 75%/25% traffic between two revisions
gcloud run services update-traffic hello-srv --to-revisions hello2-00005-blue=75,hello2-00001-green=25

# all traffic to latest revision
gcloud alpha run services update-traffic hello-srv --to-latest
```
Don't get me wrong, these are fantastic primitives, and I believe you could hook into Cloud Monitoring to pull success rates and make control plane decisions about whether to roll forward, but you really don't get much out of the box. This contrasts interestingly with Kubernetes, which is certainly a bit of a nightmare operationally, but does give you automatic validation of new pods and automated rollback (as long as you write useful healthchecks): my favorite early moment of using Kubernetes was rolling out a bad Nginx config and realizing that the pod automatically failed to deploy!
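To sketch what that control plane might look like: the decision logic is tiny once you have a success rate in hand (actually pulling it from Cloud Monitoring is the part I'm waving my hands over here; `hello-srv` reuses the service name from above, and the threshold and step size are made up):

```python
import subprocess

def next_canary_percent(success_rate, current_percent,
                        threshold=0.99, step=25):
    """Decide the canary's next traffic share.

    Roll forward in `step`-point increments while the observed
    success rate stays above `threshold`; roll all the way back
    the moment it dips below.
    """
    if success_rate < threshold:
        return 0  # roll back: send all traffic to the old revision
    return min(100, current_percent + step)

def apply_split(service, percent):
    # Shift traffic using the same primitive shown above; `service`
    # is a hypothetical Cloud Run service name.
    subprocess.run(
        ["gcloud", "run", "services", "update-traffic", service,
         "--to-revisions", f"LATEST={percent}"],
        check=True,
    )
```

Run in a loop against real monitoring data, something like this roughly reproduces the roll-forward/roll-back behavior Kubernetes gives you for free.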
Altogether though, Cloud Run is probably my favorite project deployment tool at this point, and GCP's suite of tools is quite good for getting a rather good amateur setup going very, very quickly.