---
title: react-northstar perf tooling
tags: performance
---
# react-northstar perf tooling
*This document describes the current state of things as of Apr 3, 2020.*
## Perf
### Northstar
- Separate perf examples in docsite
http://localhost:8080/maximize/chat-duplicate-messages-perf/false
- Using the [React Profiler API](https://reactjs.org/docs/profiler.html), measure the time of the first mount
- Run Puppeteer to render the example full screen
- Do this 100 times and compute the average -> render time in **milliseconds**
You can run the test locally from command line:
```
yarn perf --filter=*Chat*
```
| Example | min | avg | median | max | renderComponent.min | renderComponent.avg | renderComponent.median | renderComponent.max | components |
| ------------------------------ | ------ | ------ | ------ | ------ | ------------------- | ------------------- | ---------------------- | ------------------- | ---------- |
| ChatWithPopover.perf.tsx | 466.95 | 482.66 | 482.66 | 498.37 | 351.28 | 361.9 | 361.9 | 372.53 | 841 |
| ChatDuplicateMessages.perf.tsx | 267.28 | 296.22 | 296.22 | 325.16 | 173.47 | 193.46 | 193.46 | 213.46 | 1101 |
| ChatMinimal.perf.tsx | 0.7 | 1.98 | 1.98 | 3.27 | 0.43 | 1.3 | 1.3 | 2.16 | 1 |
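The summary columns in the table can be reproduced from the raw per-iteration durations. A minimal TypeScript sketch (the `aggregate` helper and the sample values are illustrative, not the actual perf harness code):

```typescript
// Aggregate raw per-iteration render durations (ms) into the
// summary columns used in the table above.
interface PerfStats {
  min: number;
  avg: number;
  median: number;
  max: number;
}

function aggregate(samples: number[]): PerfStats {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return {
    min: sorted[0],
    max: sorted[sorted.length - 1],
    avg: samples.reduce((sum, s) => sum + s, 0) / samples.length,
    // even-length arrays average the two middle samples
    median:
      sorted.length % 2 === 0
        ? (sorted[mid - 1] + sorted[mid]) / 2
        : sorted[mid],
  };
}

// e.g. two hypothetical iterations of ChatDuplicateMessages.perf.tsx
const stats = aggregate([267.28, 325.16]);
// min 267.28, max 325.16; avg and median of two samples coincide (~296.22)
console.log(stats);
```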
You can debug the test directly in browser:
```
yarn gulp perf:serve
```
CI runs this as part of the build pipeline and stores the results in a database in Azure.
Docsite renders charts (black line):

#### Problems
- Only the initial mount is measured. No rerender, no unmount.
- Numbers are not stable (they depend on OS, machine performance, load, ...).
  - You can see a trend after a week
  - You cannot use it as a PR gate
### Flamegrill
From Fabric, similar idea, with small differences:
- Disables some optimizations in V8 in Puppeteer to make results more stable.
- Measures in **ticks** instead of milliseconds:
- to make results more stable,
- to make the numbers unit-less.
- Instead of running a test 100 times, renders it 100 times side by side in a single page.
- To detect regressions it does not compare milliseconds or ticks but sampled call-stacks.
https://github.com/microsoft/fluentui/pull/12451#issuecomment-604959518
It does not tell whether the new result is better or worse; it just detects that it is different.
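The "same vs. different" check can be thought of as comparing the distribution of sampled call stacks between two runs. A rough TypeScript sketch of that idea (the frame-share counting below is an illustration, not Flamegrill's actual diffing algorithm):

```typescript
// Compare two profiling runs by the relative share of samples in
// which each stack frame appears; flag frames whose share shifted
// beyond a threshold. Illustrative only, not Flamegrill's real code.
type StackSample = string[]; // frame names captured in one sample

function frameShares(samples: StackSample[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const stack of samples) {
    for (const frame of new Set(stack)) {
      counts.set(frame, (counts.get(frame) ?? 0) + 1);
    }
  }
  const shares = new Map<string, number>();
  for (const [frame, n] of counts) shares.set(frame, n / samples.length);
  return shares;
}

function isDifferent(
  before: StackSample[],
  after: StackSample[],
  threshold = 0.1
): boolean {
  const a = frameShares(before);
  const b = frameShares(after);
  const frames = new Set([...a.keys(), ...b.keys()]);
  for (const f of frames) {
    if (Math.abs((a.get(f) ?? 0) - (b.get(f) ?? 0)) > threshold) return true;
  }
  return false;
}
```

Like the real tool, this only says that the profiles differ; judging whether the change is an improvement still needs a human looking at the flame graphs.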
#### Problems
- Ticks are not more stable (see the blue line in the chart above).
- Rendering 100 copies of exactly the same example side by side is not representative (the cost of the 1st render is diluted by 99 subsequent renders that are 100% cache hits).
- We are not used to taking it seriously.
### Compare Fabric to Stardust
> "Give me 20% perf improvement over Fabric7 and I will sell it." Markus
- Uses [comparable examples](https://github.com/microsoft/fluentui/tree/f9ca45d012d2e77eea728e19651b144452562004/packages/fluentui/perf-test/stories), measures perf using flamegrill and compares ticks.
https://github.com/microsoft/fluentui/pull/12451#issuecomment-604959518
#### Problems
- Compares ticks
- Not sure the examples are representative/meaningful.
#### Next steps:
- have compound examples ("whole app")
- have centralized perf page in docsite
## Bundle size
Previously we took a component source file (Chat.tsx), bundled it, and used the file size as the result ([more details](/7bVgDQxkTiSFUIvgyAkJCQ)).
Problem: It measures an unthemed component. We want to measure a themed component, ideally with meaningful settings.
Solution: We are adding dedicated docsite examples - they show the real cost of a component with a theme, and reveal problems with the non-tree-shakable theme and icons.
http://localhost:8080/components/button/definition

```
yarn stats:build
```
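At its core, a size stat boils down to the byte length of the bundled output, raw and gzipped. A minimal sketch of that final step (the `sizeStats` helper and the reporting shape are assumptions, not the real `stats:build` output):

```typescript
// Report the raw and gzipped byte size of a bundle's source text.
// In the real pipeline the input would be the (minified) webpack
// output for a themed docsite example; here it is a made-up string.
import { gzipSync } from "zlib";

function sizeStats(bundleSource: string): { raw: number; gzip: number } {
  const buf = Buffer.from(bundleSource, "utf8");
  return { raw: buf.byteLength, gzip: gzipSync(buf).byteLength };
}

const demo = sizeStats("export const Button = () => null;".repeat(100));
// gzip is far smaller than raw for repetitive code
console.log(demo.raw, demo.gzip);
```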
#### Next steps:
- let it run for some time
- check the results
- if it works as expected:
  - cover more (all) components
  - add PR gates
  - save bundle analyzer output to build artifacts
## Memory impact
Flamegrill outputs some other metrics:
```json
"metrics" : {
"Timestamp" : 230.707841,
"Documents" : 3,
"Frames" : 1,
"JSEventListeners" : 5009,
"Nodes" : 5037,
"LayoutCount" : 2,
"RecalcStyleCount" : 2,
"LayoutDuration" : 0.009137,
"RecalcStyleDuration" : 0.04362,
"ScriptDuration" : 0.815423,
"TaskDuration" : 1.022775,
"JSHeapUsedSize" : 46376520,
"JSHeapTotalSize" : 67473408
}
```
Not sure how useful these are, but we are storing them for the perf examples.
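These fields match what Puppeteer's `page.metrics()` reports. One plausible use is diffing two snapshots (before and after mounting a component) to estimate its footprint; a sketch in TypeScript, with invented numbers:

```typescript
// Diff two Puppeteer page.metrics() snapshots to estimate how many
// DOM nodes, event listeners, and heap bytes a mounted example adds.
// The snapshot values below are invented for illustration.
interface MetricsSnapshot {
  Nodes: number;
  JSEventListeners: number;
  JSHeapUsedSize: number;
}

function metricsDelta(
  before: MetricsSnapshot,
  after: MetricsSnapshot
): MetricsSnapshot {
  return {
    Nodes: after.Nodes - before.Nodes,
    JSEventListeners: after.JSEventListeners - before.JSEventListeners,
    JSHeapUsedSize: after.JSHeapUsedSize - before.JSHeapUsedSize,
  };
}

const blankPage = { Nodes: 12, JSEventListeners: 0, JSHeapUsedSize: 2_000_000 };
const withExample = { Nodes: 5037, JSEventListeners: 5009, JSHeapUsedSize: 46_376_520 };
console.log(metricsDelta(blankPage, withExample));
// → { Nodes: 5025, JSEventListeners: 5009, JSHeapUsedSize: 44376520 }
```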

In this sprint I am adding them to the docsite charts.
#### Next steps:
- let it run for some time
- check the results
- re-evaluate [Shift's PR](https://github.com/microsoft/fluent-ui-react/pull/2166) - hooked heap allocation, tagging all Fluent-allocated memory.