OneSheep

Deploy an Adonis app to Google Cloud Platform

This article explains how to deploy an Adonis app from a GitHub repo to Google Cloud Run and Google Cloud SQL. The app has been containerized as discussed in the Dockerizing Adonis Cookbook. We will configure a CD script to build our app assets, run any pending database migrations and deploy to Cloud Run as soon as changes are pushed to a deploy branch.

If you are familiar with configuring GCP projects you might want to just skim through the “Set up a GCP project” section and skip ahead to the “Set up deployment” section.

Set up a GCP project

To create a new project, tap “Create Project” in your Cloud Resource Manager and choose a nice, short project id like xyz-app.

Set two local environment variables to help with the rest of the setup:

export PROJECT_ID=xyz-app
export PROJECTNUM=$(gcloud projects describe $PROJECT_ID --format='value(projectNumber)')

Create a gcloud profile and log in

gcloud config configurations create xyz
gcloud config set project $PROJECT_ID
gcloud config set account you@example.com   # your Google account email
gcloud auth login
gcloud config configurations activate xyz

Enable some APIs

  • Cloud Run Admin
  • Cloud SQL
  • Cloud SQL Admin
  • Compute Engine
  • Cloud Build
  • Secret Manager
  • Cloud Source Repositories
gcloud services enable run.googleapis.com
gcloud services enable sql-component.googleapis.com
gcloud services enable sqladmin.googleapis.com
gcloud services enable compute.googleapis.com
gcloud services enable cloudbuild.googleapis.com
gcloud services enable secretmanager.googleapis.com
gcloud services enable sourcerepo.googleapis.com
gcloud services list --enabled
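
If you prefer a single call, the same APIs can be enabled at once. The service IDs below are the standard identifiers for the APIs listed above; this sketch only prints the command so you can review it before running it against your project:

```shell
# Standard API identifiers for the services listed above.
SERVICES="run.googleapis.com sql-component.googleapis.com sqladmin.googleapis.com compute.googleapis.com cloudbuild.googleapis.com secretmanager.googleapis.com sourcerepo.googleapis.com"

# Build the one-shot enable command and print it for review.
CMD="gcloud services enable $SERVICES"
echo "$CMD"
```

Drop the `echo` indirection and run the command directly once you are happy with it.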

Create a SQL database

Using the SQL dashboard:

  1. Add an instance with a memorable name like xyz
  2. Add a database (e.g. production)
  3. Add a user with password under “Built-in Authentication” (a long password with simple characters will work best here)
  4. Allow the Cloud Build service to access Cloud SQL:
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:$PROJECTNUM@cloudbuild.gserviceaccount.com \
    --role roles/cloudsql.client

  5. Whitelist your IP address under SQL > Connections > Networking > Authorised Networks

  6. Optionally import a data dump with a local SQL client
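
For step 3, here is one way to generate such a password locally (a sketch using standard tools; any generator will do):

```shell
# 32 alphanumeric characters: long enough, and safe to paste into
# shell commands and connection strings without any escaping.
PG_PASSWORD=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
echo "$PG_PASSWORD"
```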

Add secrets

Add all your environment secrets as follows:

printf "r3dac73d" | gcloud secrets create PG_PASSWORD --data-file=- --replication-policy=user-managed --locations=europe-west1
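
With more than a handful of secrets this gets tedious, so you can loop over a production .env file. This sketch only prints the commands it would run (the .env path and keys here are illustrative); swap the `printf` of the command for a real pipe into gcloud once the output looks right:

```shell
# Illustrative .env file; point the loop at your real production .env instead.
cat > /tmp/example.env <<'EOF'
PG_PASSWORD=r3dac73d
APP_KEY=base64:changeme
EOF

# Print one `gcloud secrets create` command per KEY=value line.
CMDS=$(while IFS='=' read -r key value; do
  case "$key" in ''|\#*) continue ;; esac
  printf "printf '%%s' '%s' | gcloud secrets create %s --data-file=- --replication-policy=user-managed --locations=europe-west1\n" "$value" "$key"
done < /tmp/example.env)
echo "$CMDS"
```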

Authorise the compute and Cloud Build service accounts to access the app secrets:

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member=serviceAccount:$PROJECTNUM-compute@developer.gserviceaccount.com \
    --role=roles/secretmanager.secretAccessor
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member=serviceAccount:$PROJECTNUM@cloudbuild.gserviceaccount.com \
    --role=roles/secretmanager.secretAccessor
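
The two member addresses follow a fixed pattern: the Compute Engine default service account is `PROJECT_NUMBER-compute@developer.gserviceaccount.com` and the Cloud Build service account is `PROJECT_NUMBER@cloudbuild.gserviceaccount.com`. A quick sketch with a made-up project number:

```shell
# Placeholder project number; in practice use the $PROJECTNUM set earlier.
PROJECTNUM=123456789012

# Derive the two default service account emails from the project number.
COMPUTE_SA="${PROJECTNUM}-compute@developer.gserviceaccount.com"
CLOUDBUILD_SA="${PROJECTNUM}@cloudbuild.gserviceaccount.com"
echo "$COMPUTE_SA"
echo "$CLOUDBUILD_SA"
```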

Create a Cloud Run Service

In the Cloud Run dashboard:

  1. Tap “Create Service”
  2. Specify port: 8080; leave the container command and container arguments empty
  3. Select “Continuously deploy new revisions from a source repository”
  4. Tap “Set up with Cloud Build”
  5. Select the deploy branch and under Build Type, select Dockerfile
  6. Under the “Variables and Secrets” tab add the values from your production .env file.
    Note that HOST is 0.0.0.0, and do not specify the PORT (Cloud Run injects it)
  7. On the “Connections” tab, tap “Add Connection” and link to our configured SQL instance
  8. On the “Security” tab, select “Compute Engine default service account” as the service account to run under

When you create this first revision it is very likely that your project deployment will fail. We will now proceed to fix this by configuring a CD script to deploy our app correctly.

Set up deployment

The previous step should have created a Cloud Build trigger with an inline build script, which we can use as a starting point for the custom build script we will add to our repo:

  1. Go to the project trigger manager where you should see a freshly created trigger with a build configuration indicated as “In-line” in the trigger list.
  2. Click to edit the trigger and under the Configuration section, make sure the Type is set to “Cloud Build configuration file (YAML or JSON)”
  3. Tap “Open Editor” and copy the yaml configuration to a file called cloudbuild.yaml in the root of your project code.
  4. Cancel the editor and switch the Location to “Repository”. This should trigger a warning that you will lose the inline configuration permanently, which is okay since we made a copy.

We are now ready to make the necessary changes to the cloudbuild spec:

<code>steps:
  # Build the container image
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build

  # Push built image to Google Artifact Registry
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    id: Push

  # Migrate
  - name: gcr.io/google-appengine/exec-wrapper
    entrypoint: bash
    args:
      - '-c'
      - >-
        /buildstep/execute.sh -i $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA -e PORT=3333 -e HOST=0.0.0.0 -e NODE_ENV=production -e DRIVE_DISK=local -e SESSION_DRIVER=cookie -e CACHE_VIEWS=false -e DB_CONNECTION=pg -e PG_HOST=/cloudsql/$PROJECT_ID:$_DEPLOY_REGION:xyz -e PG_PORT=5432 -e PG_USER=postgres -e PG_DB_NAME=production -e PG_PASSWORD=$$PG_PASSWORD -e APP_KEY=$$APP_KEY -s $PROJECT_ID:$_DEPLOY_REGION:xyz -- node ace migration:run --force
    id: Migrate
    secretEnv: ['PG_PASSWORD', 'APP_KEY']

  # Deploy to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - run
      - services
      - update
      - $_SERVICE_NAME
      - '--platform=managed'
      - >-
        --image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA
      - '--region=$_DEPLOY_REGION'
      - '--quiet'
    id: Deploy
    entrypoint: gcloud

# Store images in Google Artifact Registry
images:
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'

options:
  substitutionOption: ALLOW_LOOSE

substitutions:
  _LABELS: gcb-trigger-id=23ee5897-5e45-44bf-bd48-fbbe76a2f543
  _TRIGGER_ID: 23ee5897-5e45-44bf-bd48-fbbe76a2f543
  _DEPLOY_REGION: europe-west1
  _PLATFORM: managed
  _SERVICE_NAME: production

tags:
  - gcp-cloud-build-deploy-cloud-run
  - gcp-cloud-build-deploy-cloud-run-managed
  - production

availableSecrets:
  secretManager:
    - env: 'PG_PASSWORD'
      versionName: projects/$PROJECT_ID/secrets/PG_PASSWORD/versions/1
    - env: 'APP_KEY'
      versionName: projects/$PROJECT_ID/secrets/APP_KEY/versions/1</code>
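
One detail in the Migrate step worth calling out: PG_HOST is not a network address but the unix socket that the Cloud SQL proxy mounts, named after the instance connection name in PROJECT:REGION:INSTANCE form. A quick sketch with the values used in this article:

```shell
# Values from this article's example project.
PROJECT_ID=xyz-app
DEPLOY_REGION=europe-west1
SQL_INSTANCE=xyz

# Connection name, as shown on the SQL instance's overview page.
CONNECTION_NAME="${PROJECT_ID}:${DEPLOY_REGION}:${SQL_INSTANCE}"

# The socket path the app should use as PG_HOST.
PG_HOST="/cloudsql/${CONNECTION_NAME}"
echo "$PG_HOST"
```

The same connection name is passed to the `-s` flag of the exec wrapper so that the proxy knows which instance to mount.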

Our Dockerfile might look something like this:

<code>ARG NODE_IMAGE=node:16.13.1-alpine

FROM $NODE_IMAGE AS base
RUN apk --no-cache add dumb-init
RUN mkdir -p /home/node/app && chown node:node /home/node/app
WORKDIR /home/node/app
USER node
RUN mkdir tmp

FROM base AS dependencies
COPY --chown=node:node ./package*.json ./
RUN npm ci
COPY --chown=node:node . .

FROM dependencies AS build
RUN node ace build --production
RUN node ace ssr:build

FROM base AS production
ENV NODE_ENV=production
COPY --chown=node:node ./package*.json ./
RUN npm ci --production
COPY --chown=node:node --from=build /home/node/app/build/ .
RUN mkdir inertia
COPY --chown=node:node --from=build /home/node/app/inertia/ ./inertia/
CMD [ "dumb-init", "node", "server.js" ]</code>

It might seem like a very long recipe, but as you can imagine, once the gcloud CLI has been set up a lot of the process can be scripted. In a full production app you might want to add another cloudbuild step to run your tests against a test database and you might want to hook into other GCP services like Cloud Storage and Redis. Let us know how it goes!

Posted on Nov 16, 2022 by Jannie Theunissen
