Deploy an Adonis app to Google Cloud Platform
This article explains how to deploy an Adonis app from a GitHub repo to Google Cloud Run and Google Cloud SQL. The app has been containerized as discussed in the Dockerizing Adonis Cookbook. We will configure a CD script that builds our app assets, runs any pending database migrations, and deploys to Cloud Run as soon as changes are pushed to a deploy branch.
If you are familiar with configuring GCP projects, you might want to skim the “Set up a GCP project” section and skip ahead to the “Set up deployment” section.
Set up a GCP project
To create a new project, open your Cloud Resource Manager, tap “Create Project” and choose a short, memorable project ID like xyz-app.
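If you prefer the CLI, the same project can be created with gcloud (xyz-app is just the example ID used throughout this article):
# create the project; the ID must be globally unique
gcloud projects create xyz-app --name="XYZ App"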
Set two local environment variables to help with the rest of the setup:
export PROJECT_ID=xyz-app
export PROJECTNUM=$(gcloud projects describe $PROJECT_ID --format='value(projectNumber)')
Create a gcloud profile and log in
gcloud config configurations create xyz
gcloud config set project $PROJECT_ID
gcloud config set account happy-hacker@example.com
gcloud auth login
gcloud config configurations activate xyz
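To confirm the new configuration is active and pointing at the right project and account, list the configurations and print the active settings:
gcloud config configurations list
gcloud config list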
Enable some APIs
- Cloud Run Admin
- Cloud SQL
- Cloud SQL Admin
- Compute Engine
- Cloud Build
- Secret Manager
- Cloud Source Repositories
gcloud services list --enabled
gcloud services enable run.googleapis.com
gcloud services enable sql-component.googleapis.com
gcloud services enable sqladmin.googleapis.com
gcloud services enable compute.googleapis.com
gcloud services enable cloudbuild.googleapis.com
gcloud services enable secretmanager.googleapis.com
gcloud services enable sourcerepo.googleapis.com
Create a SQL database
Using the SQL dashboard (a gcloud equivalent is sketched after this list):
1. Add an instance with a memorable name like xyz
2. Add a database (e.g. production)
3. Add a user with password under “Built-in Authentication” (a long password with simple characters will work best here)
4. Allow the Cloud Build service to access Cloud SQL:
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$PROJECTNUM@cloudbuild.gserviceaccount.com \
--role roles/cloudsql.client
5. Whitelist your IP address under SQL > Connections > Networking > Authorised Networks
6. Optionally import a data dump with a local SQL client
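If you would rather script these steps, a rough gcloud equivalent looks like this (the instance name xyz, the db-f1-micro tier and the example password are illustrative — substitute your own values):
# create the Postgres instance, database and set the postgres user's password
gcloud sql instances create xyz --database-version=POSTGRES_14 --tier=db-f1-micro --region=europe-west1
gcloud sql databases create production --instance=xyz
gcloud sql users set-password postgres --instance=xyz --password=r3dac73d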
Add secrets
Add all your environment secrets as follows:
printf "r3dac73d" | gcloud secrets create PG_PASSWORD --data-file=- --replication-policy=user-managed --locations=europe-west1
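Repeat this for every secret the app needs (at least APP_KEY, since the build script below reads it). To sanity-check a stored value:
gcloud secrets versions access latest --secret=PG_PASSWORD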
Authorise the Compute and Cloud Build service accounts to access the app secrets:
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member=serviceAccount:$PROJECTNUM-compute@developer.gserviceaccount.com --role=roles/secretmanager.secretAccessor
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member=serviceAccount:$PROJECTNUM@cloudbuild.gserviceaccount.com --role=roles/secretmanager.secretAccessor
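To double-check the bindings, you can filter the project’s IAM policy for the service accounts:
gcloud projects get-iam-policy $PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:gserviceaccount.com" \
  --format="table(bindings.role, bindings.members)"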
Create a Cloud Run Service
In the Cloud Run dashboard:
- Tap “Create Service”
- Specify port 8080 and leave the container command and container arguments empty
- Select “Continuously deploy new revisions from a source repository”
- Tap “Set up with Cloud Build”
- Select the deploy branch and under Build Type, select Dockerfile
- Under the “Variables and Secrets” tab, add the values from your production .env file (HOST is 0.0.0.0; do not specify the PORT, Cloud Run sets it)
- On the “Connections” tab, tap “Add Connection” and link to our configured SQL instance (a gcloud equivalent for these settings is sketched after this list)
- On the “Security” tab, select “Compute Engine default service account” as the service account to run under
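For reference, a rough gcloud equivalent of the connection and environment settings above might look like this (the service name production and the instance xyz match the values used later in cloudbuild.yaml):
gcloud run services update production \
  --region=europe-west1 \
  --add-cloudsql-instances=$PROJECT_ID:europe-west1:xyz \
  --update-env-vars=HOST=0.0.0.0,NODE_ENV=production,DB_CONNECTION=pg \
  --update-secrets=PG_PASSWORD=PG_PASSWORD:latest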
When you create this first revision, the deployment will very likely fail. We will now fix this by configuring a CD script that deploys our app correctly.
Set up deployment
The previous step should have created a Cloud Build trigger with an inline build script, which we can use as a starting point for the custom build script we will add to our repo:
- Go to the project trigger manager, where you should see a freshly created trigger whose build configuration is listed as “In-line” (you can also inspect triggers from the CLI, as sketched after this list)
- Click to edit the trigger and, under the Configuration section, make sure the Type is set to “Cloud Build configuration file (YAML or JSON)”
- Tap “Open Editor” and copy the YAML configuration to a file called cloudbuild.yaml in the root of your project
- Cancel the editor and switch the Location to “Repository”. This should trigger a warning that you will lose the inline configuration permanently, which is okay since we made a copy.
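If you prefer the terminal, the triggers can also be listed there (on older gcloud releases this command lives under gcloud beta):
gcloud builds triggers list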
We are now ready to make the necessary changes to the cloudbuild spec:
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build
  # Push built image to Google Artifact Registry
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    id: Push
  # Migrate
  - name: gcr.io/google-appengine/exec-wrapper
    entrypoint: bash
    args:
      [
        '-c',
        '/buildstep/execute.sh -i $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA -e PORT=3333 -e HOST=0.0.0.0 -e NODE_ENV=production -e DRIVE_DISK=local -e SESSION_DRIVER=cookie -e CACHE_VIEWS=false -e DB_CONNECTION=pg -e PG_HOST=/cloudsql/$PROJECT_ID:$_DEPLOY_REGION:xyz -e PG_PORT=5432 -e PG_USER=postgres -e PG_DB_NAME=production -e PG_PASSWORD=$$PG_PASSWORD -e APP_KEY=$$APP_KEY -s $PROJECT_ID:$_DEPLOY_REGION:xyz -- node ace migration:run --force',
      ]
    id: Migrate
    secretEnv: ['PG_PASSWORD', 'APP_KEY']
  # Deploy to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - run
      - services
      - update
      - $_SERVICE_NAME
      - '--platform=managed'
      - '--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - >-
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
      - '--region=$_DEPLOY_REGION'
      - '--quiet'
    id: Deploy
    entrypoint: gcloud
# Store images in Google Artifact Registry
images:
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
substitutions:
  _LABELS: gcb-trigger-id=23ee5897-5e45-44bf-bd48-fbbe76a2f543
  _TRIGGER_ID: 23ee5897-5e45-44bf-bd48-fbbe76a2f543
  _DEPLOY_REGION: europe-west1
  _GCR_HOSTNAME: eu.gcr.io
  _PLATFORM: managed
  _SERVICE_NAME: production
tags:
  - gcp-cloud-build-deploy-cloud-run
  - gcp-cloud-build-deploy-cloud-run-managed
  - production
availableSecrets:
  secretManager:
    - env: 'PG_PASSWORD'
      versionName: projects/$PROJECT_ID/secrets/PG_PASSWORD/versions/1
    - env: 'APP_KEY'
      versionName: projects/$PROJECT_ID/secrets/APP_KEY/versions/1
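With cloudbuild.yaml committed, pushing to the deploy branch (assuming it is literally called deploy) kicks off the whole pipeline. You can follow a build from the terminal, for example:
git push origin deploy
gcloud builds list --limit=5
# replace BUILD_ID with an ID from the list output
gcloud builds log BUILD_ID --stream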
Our Dockerfile might look something like this:
<code>ARG NODE_IMAGE=node:16.13.1-alpine

# Base stage: non-root node user with dumb-init as PID 1
FROM $NODE_IMAGE AS base
RUN apk --no-cache add dumb-init
RUN mkdir -p /home/node/app && chown node:node /home/node/app
WORKDIR /home/node/app
USER node
RUN mkdir tmp

# Install all dependencies (including dev) and copy the source
FROM base AS dependencies
COPY --chown=node:node ./package*.json ./
RUN npm ci
COPY --chown=node:node . .

# Compile the Adonis app and the SSR bundle
FROM dependencies AS build
RUN node ace build --production
RUN node ace ssr:build

# Production stage: compiled output and production dependencies only
FROM base AS production
# Cloud Run injects PORT at runtime; default to 8080 so local builds work too
ARG PORT=8080
ENV NODE_ENV=production
ENV PORT=$PORT
ENV HOST=0.0.0.0
COPY --chown=node:node ./package*.json ./
RUN npm ci --production
COPY --chown=node:node --from=build /home/node/app/build/ .
RUN mkdir inertia
COPY --chown=node:node --from=build /home/node/app/inertia/ ./inertia/
EXPOSE $PORT
CMD [ "dumb-init", "node", "server.js" ]
</code>
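Before relying on the pipeline, it is worth building and running the image locally (the xyz-app tag and the local .env file are placeholders for your own values):
docker build -t xyz-app .
docker run --rm -p 8080:8080 -e PORT=8080 --env-file .env xyz-app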
Conclusion
It might seem like a very long recipe but, as you can imagine, once the gcloud CLI has been set up a lot of the process can be scripted. In a full production app you might want to add another cloudbuild step to run your tests against a test database, and you might want to hook into other GCP services like Cloud Storage and Redis. Let us know how it goes!