---
tags: infrastructure, engineering
---
# Deploying to Master/Staging
### Steps to deploy to master
- Merge staging into master. DO NOT squash, rebase, or otherwise change the commit history.
- Rewriting commits will cause merge conflicts between staging and master on future merges.
- Once the code is in master, manually apply the database migrations with `nx run protocol-api:migrateDeploy`. Make sure the database URL in your local `.env` points to prod, and delete it from your `.env` immediately afterwards.
- We tried automating all of this, but whitelisting the GitHub Actions IP range was non-obvious. Once that is solved, this can be automated.
- Another approach to running the migrations automatically is to have them run before the API starts.
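One way to avoid touching `.env` at all is to scope the database URL to the single migration command, assuming the Prisma schema reads the connection string via `env("DATABASE_URL")`. The connection string below is a placeholder, not the real one:

```shell
# DATABASE_URL is set only for this one command, so the prod connection
# string never lands in .env and there is nothing to delete afterwards.
# Placeholder URL -- copy the real one from the DO dashboard ("show
# password" reveals the full connection string).
DATABASE_URL="postgresql://user:password@host:25060/govrn?sslmode=require" \
  nx run protocol-api:migrateDeploy
```

Because the variable is scoped to the single command, there is no window where the prod URL sits in a file on disk.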
### Steps to deploy to staging
- Merge a branch into staging using a squash merge.
- Once the code is in staging, manually apply the database migrations with `nx run protocol-api:migrateDeploy`. Make sure the database URL in your local `.env` points to staging, and delete it from your `.env` immediately afterwards.
- We tried automating all of this, but whitelisting the GitHub Actions IP range was non-obvious. Once that is solved, this can be automated.
- Another approach to running the migrations automatically is to have them run before the API starts.
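The "run prior to starting the API" idea can be sketched as a container entrypoint script. The file name and start command here are assumptions, not the actual protocol-api setup:

```shell
#!/bin/sh
# docker-entrypoint.sh -- runs when the container *starts*, not when it
# builds, so the migrations hit whatever database the running container
# is configured for.
set -e

# Apply any pending migrations first; abort startup if they fail.
nx run protocol-api:migrateDeploy

# Hand off to the API process (hypothetical start command).
exec node dist/apps/protocol-api/main.js
```

The Dockerfile would then wire this up with `ENTRYPOINT ["./docker-entrypoint.sh"]` rather than a `RUN` instruction, since `RUN` only executes at build time.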
### Rolling back
If problems crop up during a migration, it's relatively easy to roll back and resolve the bad migration. The following command:
`yarn prisma migrate resolve --rolled-back "migration name" --schema apps/protocol-api/src/prisma/schema.prisma`
marks the failed migration as rolled back, putting the db in a state ready to continue with migrations.
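As a sketch of the full rollback flow (the migration name below is a made-up example, not a real one from this repo):

```shell
# 1. Inspect which migration failed and how it is recorded
yarn prisma migrate status --schema apps/protocol-api/src/prisma/schema.prisma

# 2. Mark the failed migration as rolled back in the migrations table
#    ("20230101000000_add_example_table" is a hypothetical migration name)
yarn prisma migrate resolve --rolled-back "20230101000000_add_example_table" \
  --schema apps/protocol-api/src/prisma/schema.prisma

# 3. Fix the migration SQL, then re-apply
nx run protocol-api:migrateDeploy
```

Note that `migrate resolve` only updates the migration history; it does not revert any schema changes the failed migration already applied.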
## Additional Notes
These were notes I took during Keating's demo
- Running migrations for `staging` and `prod`
- [https://hackmd.io/@Govrn/rykd-9Bzn?type=view](https://hackmd.io/@Govrn/rykd-9Bzn?type=view)
- Process Overview (Order of Operations):
- Merge into `staging`
- Run the migrations for each environment
- Steps are slightly different for staging since we squash and merge
- Only need to run this when there are staging migrations; otherwise we're fine just merging into staging / prod
- Once code is merged into staging, manually apply the staging migration
- Make sure it's pointing to the staging db, and then delete the URL from the .env
- Can run the migration **right before** the container starts in the protocol-api
- **Steps to automate:**
- Open up the Dockerfile for the protocol-api
- Run this first `nx run protocol-api:migrateDeploy`
- Can run commands via the `RUN`
- Reason we don't want the command in the Dockerfile is that this'll run when the **container builds**; we want it to run when the container starts running
- Create a script and run it to automate
- **Steps to do manually:**
- Merge into staging, squash branch
- Once in staging, pull down locally
- Manually apply the db migration **while pointing to the db environment**
- Open the .env and point to the staging database
- Check the head of the staging branch to make sure it matches what’s on staging
- Could get in trouble if applying the wrong migration to the db
- As long as it’s the same branch we won’t have any issues
- Can typically apply the migrations **before** it is deployed
- Can go to DO (DigitalOcean), select a user for the admin, and then select the db and copy its connection details to local
- Copy the connection string with the ‘show password’ to get the full db URL to use
- How would we have a duplicate user address in staging?
- Which database did we select for the connection string?
- Which is the one that we use?
- `govrn`
- Troubleshooting:
- Roll back to the first migration we were going to apply in the PR
- Re-run the most recent migration needed
- When deploying a new job:
- Everything will deploy automatically, but make sure it’s running properly and make sure the .env vars are correct and behaving as expected in the new prod environment
- We’d have transient logs in DO from the jobs, but we can also add this to Datadog
- Datadog should be picking up **too many logs**
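The "check the head of the staging branch" step from the manual process can be scripted; a sketch assuming the deployed branch is tracked as `origin/staging`:

```shell
# Refuse to migrate unless the local checkout matches what is deployed.
git fetch origin staging
if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/staging)" ]; then
  echo "Local HEAD does not match origin/staging; refusing to migrate" >&2
  exit 1
fi
nx run protocol-api:migrateDeploy
```

This guards against the "applying the wrong migration to the db" problem noted above.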