---
tags: development process, team
---
# OLM Playbook
## Team Processes
See the [Scrum Process](https://hackmd.io/772hY7S0TPCSyvV6DKnIMg) document.
## Downstream
### Sync: How To
Documentation on syncing the upstream repositories into the downstream can be found in the [downstream olm repository](https://github.com/openshift/operator-framework-olm/blob/master/docs/downstream-ci.md).
### OLM E2E Tests are Flaky, I cannot merge!
#### Context
You submit a PR to downstream olm. The olm e2e tests fail in different ways across runs: for instance, in the first CI run `foo` fails; in the second, `foo` passes but `bar` fails. And you are on a time crunch.
#### Action
Manually verify that the test cases pass:
1. If you haven't already, add the `cluster-bot` Slack App:
   1. Scroll down on the left panel (where all your channels and private messages are) until you see `Apps`
   2. Press the plus button
   3. Search for `cluster-bot`
   4. Add it
2. Go to the `cluster-bot` App
3. Create a cluster with your PR by sending: `launch openshift/operator-framework-olm#<number>`
4. Once the deployment completes (approx. 30 minutes) you will be notified and given the credentials and a kubeconfig file
5. Download the kubeconfig file
6. Execute the failing test(s) 10 times and make sure they all pass:
```bash=
for i in $(seq 1 10); do
  KUBECONFIG=<path to downloaded kubeconfig> TEST="<test name>" make e2e/olm >> test_run.txt
done
grep -c "SUCCESS" test_run.txt # should output 10
```
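Before kicking off the loop, it can be worth sanity-checking that the downloaded kubeconfig actually reaches the cluster-bot cluster. A minimal check, assuming `oc` is on your `PATH`:
```bash=
export KUBECONFIG=<path to downloaded kubeconfig>
oc whoami              # prints the authenticated user from the cluster-bot credentials
oc get clusterversion  # cluster should report an installed version and Available=True
oc -n openshift-operator-lifecycle-manager get pods  # OLM pods should be Running
```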
If the tests pass, paste this information on the PR: the command you executed and its output.
Ask an approver to override the failing e2e CI job.
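For reference, the override itself is usually a Prow command left as a PR comment by someone with sufficient permissions. The job context below is only an example; use the exact name of the failing check:
```
/override ci/prow/e2e-gcp-olm
```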
#### What if I cannot get any cluster-bot clusters?
1. Escalate to management
2. Try to get data on the stability of the test(s) on CRC; see [testing downstream on CRC](https://github.com/openshift/operator-framework-olm/blob/master/docs/local-testing-with-crc.md) (may not work on Mac). A sketch of pointing the test loop at CRC follows this list.
3. Together with management and an approver (TL, Staff Engineer, etc.), make a risk evaluation and override if the risk is tenable
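If you do get a CRC cluster running, the same loop from the previous section can be pointed at it. A rough sketch, assuming the default kubeconfig location (check what `crc` reports on your machine):
```bash=
# after `crc start` finishes successfully
eval "$(crc oc-env)"                              # puts the bundled oc on PATH
export KUBECONFIG=~/.crc/machines/crc/kubeconfig  # assumed default location; verify against `crc start` output
TEST="<test name>" make e2e/olm                   # same invocation as against cluster-bot
```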
### Other CI jobs are Flaky, I cannot merge!
If CI jobs *other than* ours are failing repeatedly, e.g. `e2e-gcp` or `e2e-upgrade`, look at the logs and try to understand what is happening.
Get in touch with [#forum-testplatform](https://coreos.slack.com/archives/CBN38N3MW) for more help. If the merge is urgent, escalate
the issue to management immediately.
If the *console* tests are failing, you can try to reproduce them locally. See instructions [here](https://github.com/openshift/operator-framework-olm/blob/master/docs/downstream-ci.md#running-console-test-locally).