# How and why? Practicalities of the on call rotation at 99
This doc includes:
- the How and Why of being on call at 99
- the tools we use
- suggested steps to take when responding to an alert
## On call rotation :alarm_clock:
Being on call means being available during a set period of time to investigate and fix production incidents for the systems your team is responsible for. On-call responsibilities extend beyond normal office hours: you will be on call for shifts of 1 week, 24/7.
When you are on call there is an expectation that you will:
- always have your laptop with internet and a usable dev env
- always have your phone/be contactable
- be responsible in your choices of activities
Being on call does NOT mean you are expected to know how to fix all alerts, or even be the first to respond. Only that you do your best, communicate, and escalate sensibly.
## Why? :thinking_face:
At 99designs we follow the DevOps model, where development and operations are done by the same team. Not having a separate ops team means that our engineers are responsible for their code from start to finish. Understanding the broader implications of the code you ship, and knowing it might wake you at 3am, should influence the quality of your work.
> "you wrote it, you run it"
You are also best placed to understand the current state of the codebases (in theory..).
Fortunately, we have a lot of systems in place to make it harder to deploy bugs (code review processes, automated tests in CI). When a bug does slip through, we've got monitoring and alerting in place to make sure we discover it (or anything else that might be going wrong with our app/s) quickly.
## How? :calendar:
There is an on-call roster for each engineering team, managed via [PagerDuty](https://support.pagerduty.com/docs/notification-phone-numbers). You will be on call for 1 week (day and night). You are only on call for apps your team is responsible for. There are escalation policies in place, meaning you are never solely responsible for being on call.
![](https://i.imgur.com/ULigNY9.png)
You will be phoned, emailed, and texted (depending on your settings), and our Slack channel will be pinged. You can set yourself up on PagerDuty through the web UI, but there is also an app that is quite handy. Take the time to set this up how you would like. Tip: double check that "do not disturb" mode on your phone doesn't block PagerDuty, and consider adding the [PagerDuty contact](https://support.pagerduty.com/docs/notification-phone-numbers) as a "favourite".
If you ever need to swap on call shifts, make sure to chat to your team and your engineering manager to organise this. Use your own best judgment for what qualifies for needing to swap shifts.
### Compensation :moneybag:
For being available and ready to respond to incidents outside of office hours, you will be paid 10% of your normal hourly salary. This compensation applies to first-level on-call only.
You will be paid 200% of your normal hourly salary while responding to incidents. This applies to anyone responding to an incident, including escalations.
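As an illustrative example only: at a nominal rate of $50/hour, each out-of-hours hour you are available accrues $5, while an hour spent actively responding to an incident is paid at $100.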
To be compensated you must fill out an [incident response form](https://docs.google.com/forms/d/e/1FAIpQLSfeRt1sCeryugb-7ZSaExvZCDpQfouMaJ8FzjUL3Qjodo4xmw/viewform).
## Tools :hammer_and_wrench:
Can you access them? What are they good for?
- [Bugsnag](https://app.bugsnag.com/99designs/): exception monitoring
- [Datadog](https://app.datadoghq.com/dashboard): performance monitoring
- [Papertrail](https://papertrailapp.com/dashboard): server logs (forwarded from CloudWatch)
- [Wormly](https://www.wormly.com/welcome): uptime and server monitoring. Used for checking that specific URLs return 200 OK and/or contain a specific string.
- [PagerDuty](https://99designs.pagerduty.com/sign_in): on-call roster; sends out alerts and escalations
- [99cli (a wrapper over the AWS CLI)](https://github.com/99designs/99cli), [aws-vault](https://github.com/99designs/aws-vault), [chamber](https://github.com/segmentio/chamber): tools for accessing prod servers and secrets
**PAUSE TO CHECK IF EVERYONE CAN ACCESS ALL THESE THINGS :D**
Can you run the following:
- `aws-vault exec platform -- 99cli logs /bastion/access.log`
- `aws-vault exec platform -- chamber list abacus`
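If both of those work, here is a rough sketch of a few more ways the same tools combine day to day. The `platform` profile and `abacus` service come from the commands above; the key name `database-url` is a placeholder for illustration, not a real secret:

```bash
# List the secrets stored for the abacus service (same as the checklist above)
aws-vault exec platform -- chamber list abacus

# Read a single secret's value ("database-url" is a made-up key name)
aws-vault exec platform -- chamber read abacus database-url

# Run a command with the service's secrets injected as environment variables
aws-vault exec platform -- chamber exec abacus -- env | sort
```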
It can be useful to have a browser folder of these links so you can rapid-fire open them all when investigating.
Make a habit of looking at Datadog regularly; it helps to know what the graphs _should_ look like, so you can tell what is irregular!
Similarly, spend some time lurking in your team's ops channel(s). What stuff is just noise, and why? Give yourself the goal of getting to the bottom of an error/alert, even if it's the 1000th time it's fired.
## Responding to an alert :fire:
Although you are not ultimately responsible for _fixing_ the error/bug, you are responsible for the _alert_ once you have acknowledged it. This means you are responsible for coordinating a response and communicating (unless/until you hand over this responsibility).
Alerts might come from:
- Wormly (looks for content on a page or a particular HTTP response, e.g. "critical")
- Datadog (graphs of performance over time, e.g. too many 500s)
- Cloudwatch (AWS monitoring, e.g. database metrics, healthy host count)
- Manually triggered, by support or anyone at all emailing `emergency@99designs.com`, or through the PagerDuty UI (e.g. something is broken and you need to escalate an alert you've received to a different team)
Keep calm and...
**Acknowledge**: step one! Otherwise it will escalate to the level-2 on call. You can do this via Slack, the PagerDuty app/website, or by responding to the texts/phone calls.
**Triage**: determine the scope of the issue. Which codebase? Which users, and how many? Is it ok-ish or :dumpsterfire:?
**Notify**: depending on the scope, communicate with other engineers, #announcements, support, the designer forum, or end users (noticebar), as you deem appropriate.
**Mitigate**: your primary job is to keep the plane flying and minimise user impact. Do the minimum required to keep our service usable; troubleshooting an ultimate fix is secondary to that. Sometimes doing _nothing_ is a valid option.
**Escalate**: you are encouraged to seek help and support.
**Follow up**: with a [(blameless) post mortem meeting](https://codeascraft.com/2012/05/22/blameless-postmortems/) and a write-up. Follow up on action items from the meeting.
### Investigating :mag_right:
The alert itself will contain some information; start pulling on threads from there. Your next ports of call could be the app's ops/readme and our various monitoring services.
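For example, if a Wormly check has fired, you can reproduce the probe by hand to see whether the endpoint is really unhealthy or the monitor is being flaky. A minimal sketch (the URL and the expected string are placeholders):

```bash
# Print just the HTTP status code for the page the monitor is checking
curl -sS -o /dev/null -w "%{http_code}\n" https://99designs.com/

# Check the body still contains the string the monitor looks for
curl -sS https://99designs.com/ | grep -q "99designs" && echo OK || echo MISSING
```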
Don't let assumptions cloud your judgment ("I merged that PR earlier, so it must be that", or "this thing is always alerting for no reason, it must be fine"). Remember that everything happens for a reason. There are great blogs/resources out there on debugging, but ultimately practice makes perfect :)
As you are debugging, it helps to leave comments in Slack about what you think the cause might be and what you have tried so far - keep a log of what you are doing. Take screenshots of the graphs you can see and post them in Slack. This will help in the future, with post mortems, or if someone else jumps in to help out.
I also suggest keeping a doc handy of commands you have found useful along the way, so you need not search back through Slack history - e.g. how to ssh into a prod box, or how to filter bastion logs.
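As a rough starting point, such a doc might contain entries like the following (the grep pattern is a placeholder, and this assumes `99cli logs` writes to stdout):

```bash
# Tail the bastion access logs (from the access checklist above)
aws-vault exec platform -- 99cli logs /bastion/access.log

# The same logs, filtered for a particular user or IP
aws-vault exec platform -- 99cli logs /bastion/access.log | grep "203.0.113.7"
```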
## Good reads :book:
- Every codebase should have an ops/readme; definitely read them, and update them as you see fit.
- Official 99 ops docs [here](https://99designs.atlassian.net/wiki/spaces/DEV/pages/31916207)
- All previous postmortems are available on Confluence in that space^.
- Dan wrote [some good notes](https://99designs.atlassian.net/wiki/spaces/DEV/pages/939819757/Dan+s+rough+notes+for+being+on+call+for+the+php+schedule)
- Google SRE books: https://landing.google.com/sre/books/
- [Strategies for fixing an incident](https://99designs.atlassian.net/wiki/spaces/DEV/pages/31916229/Strategies+for+fixing+an+incident)
- [Giacomo's notes](https://hackmd.io/Ye3yveHyRGqVR7d9S5iryA) on addressing an alert