## Topic 51 Pragmatic Starter Kit
### Intro
> *Civilization advances by extending the number of important operations we can perform without thinking.* — Alfred North Whitehead
We can’t afford to wade through pages of instructions every time we perform some common operation. Ex:
- Build and release procedure
- Testing
- Project paperwork
- ... (any other recurring task on the project)
These tasks should be:
- Automatic and repeatable
- Consistent, so the result doesn’t depend on who runs them
:::success
**Idea of the Pragmatic Starter Kit**
- Version Control
- Regression Testing
- Full Automation
:::
### Drive with Version Control
> **Use Version Control to Drive Builds, Tests, and Release**
- Build, test, and deployment should be triggered by commits or pushes to version control, and run in a container in the cloud.
- Release to staging or production is specified by using a tag in your version control system.
- True continuous delivery
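A minimal sketch of the tag-driven release idea: a small script a CI pipeline could run to decide where a build goes. The tag convention here (plain `vX.Y.Z` means production, an `-rc.N` pre-release means staging) is an assumption for illustration, not a standard.

```typescript
// Map a version-control tag to a deployment target.
// Hypothetical convention: plain semver tags release to production,
// "-rc" pre-release tags release to staging, anything else only builds.
function releaseTarget(tag: string): "production" | "staging" | "none" {
  if (/^v\d+\.\d+\.\d+$/.test(tag)) return "production";
  if (/^v\d+\.\d+\.\d+-rc\.\d+$/.test(tag)) return "staging";
  return "none"; // ordinary commits: build and test, but don't deploy
}

console.log(releaseTarget("v1.4.0"));      // production
console.log(releaseTarget("v1.4.0-rc.1")); // staging
console.log(releaseTarget("main"));        // none
```

Keeping the decision in a plain function like this means the release rule itself is testable, just like any other code.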
### Ruthless and Continuous Testing
> **We are <U>driven</U> to find our bugs <U>now</U>**, so we don’t have to endure the shame of others finding our bugs later.
- small nets (unit tests) => catch minnows
- big, coarse nets (integration tests) => catch killer sharks
:::info
**Test Early, Test Often, Test Automatically**
- Start testing as soon as we have code
- A good project may have more test code than production code
- The time it takes to produce this test code is worth the effort.
- Knowing that you’ve passed the test gives you a high degree of confidence that a piece of code is “done.”
:::
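A tiny illustration of testing as soon as there is code, with no framework assumed: the test lives right next to the function it checks. The `clamp` function and `assertEqual` helper are hypothetical examples, not from the book.

```typescript
// Production code: the function under test.
function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}

// Test code, written alongside the function from day one.
function assertEqual<T>(actual: T, expected: T, label: string): void {
  if (actual !== expected) {
    throw new Error(`${label}: expected ${expected}, got ${actual}`);
  }
}

assertEqual(clamp(5, 0, 10), 5, "in range");      // unchanged
assertEqual(clamp(-3, 0, 10), 0, "below range");  // clamped to min
assertEqual(clamp(42, 0, 10), 10, "above range"); // clamped to max
console.log("all clamp tests passed");
```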
:::info
**Coding Ain’t Done ’Til All the Tests Run**
> **“Test for real”**: the test environment should match the production environment closely.
- **Unit Testing**: The foundation of all the other forms of testing
- **Integration Testing**: If the subsystems’ contracts are in place and well tested, any integration issues can be detected easily
- **Validation and Verification**
- The users told you what they wanted, but is it what they need?
- Does it meet the functional requirements?
- Be conscious of end-user access patterns and how they differ from developer test data
- **Performance Testing**
- Performance requirements under real-world conditions. Ex: expected number of users, or connections, or transactions per second
- My own example: how the page renders at different browser zoom levels
- **Testing the Tests**: Cause the bug deliberately and make sure the test complains.
:::
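“Testing the Tests” can be sketched as a manual sanity check: plant a bug on purpose and confirm the test actually complains. The `add`/`addBroken` pair below is a hypothetical example of that idea.

```typescript
function add(a: number, b: number): number {
  return a + b;
}

// A deliberately broken variant: will the test notice?
function addBroken(a: number, b: number): number {
  return a - b; // planted bug
}

// The test, parameterized over the implementation it checks.
function testAdd(impl: (a: number, b: number) => number): boolean {
  return impl(2, 3) === 5 && impl(-1, 1) === 0;
}

console.log(testAdd(add));       // true: the real code passes
console.log(testAdd(addBroken)); // false: the test catches the planted bug
```

If the broken variant had slipped past `testAdd`, the test itself would need fixing before trusting it with real code.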
#### Testing Thoroughly
> How do you know if you have tested the code base thoroughly enough?
:::warning
**Test State Coverage, Not Code Coverage**
What matters is the number of **states** your program may have, not how many lines of code you’ve exercised (“code coverage”).
:::
Ex: A function that takes two integers, each of which can be a number from `0` to `999`:
```typescript
function test(a: number, b: number): number {
  return a / (a + b);
}
```
This three-line function has 1,000,000 logical states (1,000 * 1,000):
- 999,999 states will work correctly
- 1 state will not (when `a + b` equals `0`)
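The point can be made concrete: a single call gives 100% line coverage of the one-line body, yet never forces us to try the one failing state. (The function is renamed `ratio` here to keep this snippet self-contained; `0/0` yields `NaN` in JavaScript.)

```typescript
function ratio(a: number, b: number): number {
  return a / (a + b);
}

// One call: full line coverage, 999,999 states still untried.
console.log(ratio(1, 1)); // 0.5 — a nominal, well-behaved state

// The single failing state, which code coverage alone never demands:
console.log(ratio(0, 0)); // NaN — a + b === 0 means dividing 0 by 0
```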
#### Property-Based Testing
> You would need to identify all possible states of the program. Unfortunately, this is a really hard problem.
- Explore how your code handles unexpected states by having a computer generate those states.
- Use property-based testing techniques to generate test data according to the contracts and invariants of the code under test. ([Topic 42, Property-Based Testing](https://hackmd.io/92XL7p1DRkqiLFdKKlTW-A#Topic-42-Property-Based-Testing))
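A hand-rolled sketch of the idea (a real project would use a library such as fast-check): generate random states and check an invariant over each one. The chosen invariant — `ratio` of two non-negative inputs lies in `[0, 1]` — is an assumption about the example function above.

```typescript
function ratio(a: number, b: number): number {
  return a / (a + b);
}

// Generate random input states and check the invariant on each one.
function checkProperty(runs: number): { passes: number; failures: number } {
  let passes = 0;
  let failures = 0;
  for (let i = 0; i < runs; i++) {
    const a = Math.floor(Math.random() * 1000); // 0..999
    const b = Math.floor(Math.random() * 1000); // 0..999
    const r = ratio(a, b);
    // Invariant: for non-negative inputs the result lies in [0, 1].
    if (r >= 0 && r <= 1) passes++;
    else failures++; // a + b === 0 yields NaN, which violates the invariant
  }
  return { passes, failures };
}

console.log(checkProperty(10_000));
```

Rather than hand-picking cases, the generator explores states we might never have thought to try; a failure report tells us which state broke the invariant.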
:::danger
:heavy_exclamation_mark: **Find Bugs Once**:heavy_exclamation_mark:
- If a bug slips through the net of existing tests, you need to add a new test to trap it next time.
- Once a human tester finds a bug, it should be the last time a human tester finds that bug.
:::
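Continuing the example above: once the `a + b === 0` state has bitten once, a regression test pins it down so no human has to find it again. Guarding with an explicit `RangeError` is one possible fix, assumed here for illustration.

```typescript
// Fixed version: the previously found bug is now rejected explicitly.
function ratio(a: number, b: number): number {
  if (a + b === 0) throw new RangeError("a + b must be non-zero");
  return a / (a + b);
}

// Regression test added the day the bug was found; it runs forever after.
let caught = false;
try {
  ratio(0, 0);
} catch (e) {
  caught = e instanceof RangeError;
}
if (!caught) throw new Error("regression: ratio(0, 0) must be rejected");
if (ratio(1, 3) !== 0.25) throw new Error("ratio(1, 3) should be 0.25");
console.log("regression test passed");
```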
### Full Automation