SteveNguyen109 · 11 mo. ago · edited 11 mo. ago
Just pull the Directus Docker image from Docker Hub and deploy it directly to Google Cloud Run with your own config. Here is my Google Cloud Build config file that automates the deployment:
```yaml
steps:
  - name: 'docker'
    args:
      - 'pull'
      - 'directus/directus:latest'
  - name: 'docker'
    args:
      - 'tag'
      - 'directus/directus:latest'
      - '${_REGION_NAME}-docker.pkg.dev/$PROJECT_ID/${_REPO_NAME}/${_SERVICE_NAME}'
  - name: 'docker'
    args:
      - 'push'
      - '${_REGION_NAME}-docker.pkg.dev/$PROJECT_ID/${_REPO_NAME}/${_SERVICE_NAME}'
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        gcloud run deploy $_SERVICE_NAME \
          --image $_REGION_NAME-docker.pkg.dev/$PROJECT_ID/$_REPO_NAME/$_SERVICE_NAME \
          --port 8055 \
          --region $_REGION_NAME \
          --platform managed \
          --allow-unauthenticated \
          --set-env-vars "PUBLIC_URL=..." \
          --set-env-vars "KEY=keyId" \
          --set-env-vars "SECRET=secretId" \
          --set-env-vars "DB_CLIENT=pg" \
          --set-env-vars "DB_HOST=$$DB_HOST" \
          --set-env-vars "DB_PORT=5432" \
          --set-env-vars "DB_DATABASE=directus" \
          --set-env-vars "DB_USER=postgres" \
          --set-env-vars "DB_PASSWORD=$$DB_PASSWORD" \
          --set-env-vars "STORAGE_LOCATIONS=s3" \
          --set-env-vars "STORAGE_S3_DRIVER=s3" \
          --set-env-vars "STORAGE_S3_KEY=$$S3_KEY" \
          --set-env-vars "STORAGE_S3_SECRET=$$S3_SECRET" \
          --set-env-vars "STORAGE_S3_BUCKET=yourBucketName" \
          --set-env-vars "STORAGE_S3_REGION=us-west-2"
    secretEnv:
      - 'DB_HOST'
      - 'DB_PASSWORD'
      - 'S3_KEY'
      - 'S3_SECRET'
substitutions:
  _REGION_NAME: 'us-west1'
  _REPO_NAME: 'container-images'
  _SERVICE_NAME: 'directus-cloud-run'
images:
  - '${_REGION_NAME}-docker.pkg.dev/$PROJECT_ID/${_REPO_NAME}/${_SERVICE_NAME}'
# Secrets in the `args` field must be referenced via environment variables prefixed with $$.
availableSecrets:
  secretManager:
    - versionName: 'projects/projectId/secrets/directus_aws_rds_endpoint/versions/latest'
      env: 'DB_HOST'
    - versionName: 'projects/projectId/secrets/directus_aws_rds_password/versions/latest'
      env: 'DB_PASSWORD'
    - versionName: 'projects/projectId/secrets/directus_aws_s3_key/versions/latest'
      env: 'S3_KEY'
    - versionName: 'projects/projectId/secrets/directus_aws_s3_secret/versions/latest'
      env: 'S3_SECRET'
```
The cloudbuild.yaml file above simply pulls the Directus Docker image, pushes it to Google Cloud Artifact Registry, and then deploys it to Cloud Run. The vars prefixed with $$ in the Bash script are secrets from Google Cloud Secret Manager that are resolved at build time. Cloud Run can scale your container count down to zero when there are no requests to your Directus APIs, and you can reduce cold-start time with Startup CPU Boost. More to read in the Google Cloud docs: build secrets, build substitution vars, Cloud Run env vars.
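If you haven't created the secrets yet, here's a rough sketch of the setup commands. The secret names match the `availableSecrets` block above; the values, PROJECT_NUMBER, and the RDS endpoint are placeholders you'd replace with your own:

```
# Create each secret with an initial version (repeat for all four)
echo -n "mydb.xxxxxxxx.us-west-2.rds.amazonaws.com" | \
  gcloud secrets create directus_aws_rds_endpoint --data-file=-
echo -n "yourDbPassword" | \
  gcloud secrets create directus_aws_rds_password --data-file=-

# The Cloud Build service account needs read access to each secret
gcloud secrets add-iam-policy-binding directus_aws_rds_password \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

# Then kick off the build manually (or wire up a trigger)
gcloud builds submit --config cloudbuild.yaml .
```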
The downside of Directus is that its config.js file doesn't allow asynchronous logic with async/await to pull secrets from an external cloud service at runtime, so I have to pull the secrets at build time and assign them to env vars. You can't module.exports a JavaScript Promise or add a top-level await because Directus only supports CommonJS modules at the moment.
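A minimal illustration of that constraint (fakeSecretFetch is a hypothetical stand-in for an async secrets client like Secret Manager's):

```javascript
// config.js sketch — CommonJS, as Directus expects.
// fakeSecretFetch stands in for an async secrets client call.
const fakeSecretFetch = () => Promise.resolve({ DB_PASSWORD: 's3cr3t' });

// Not possible: top-level await is a syntax error in a CommonJS module.
// module.exports = await fakeSecretFetch();

// Exporting the Promise doesn't help either: the consumer receives a
// pending Promise object, not the resolved config values.
module.exports = fakeSecretFetch();
console.log(module.exports instanceof Promise); // true
```

Hence pulling the secrets at build time and passing them in as plain env vars.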
After reading the Strapi codebase, I decided not to use it due to its extremely poor TypeScript typings. The codebase is mostly written in vanilla JavaScript.
Contentful and Sanity both charge prohibitively for plans with an API uptime SLA, so I stay far away from them.