---
robots: noindex, nofollow
tags: pitch
---

For instructions on shaping a project, see here: [Shaping a Project](/kX02SXVbS6KzMOQd56i6Cg)

# Tracking Items

| Item | Due Date | Status |
| -------- | -------- | -------- |
| Levi to follow up with perf learnings from the Teams investigation, to get tracing into v7 | EOW 6/5 | Tracking |

# Performance Baselining

### Problem

### Appetite

### Solution

### Risks (Rabbit holes)

### Out of scope (No-gos)

### Notes from @oc-__NrETrSk4IeUVGlhMw Miro

[5/28 2:40 PM] Miroslav Stastny

Hey Paul, as you are writing a project proposal for compound perf tests, let me share the v0 experience with that. Feel free to ignore this if it's not relevant for you (smile)

In Stardust, we started with single-component perf tests: for example, we render a single Button in isolation and measure its perf. You can see a chart for that in our public docsite, at the end of this page:

![](https://i.imgur.com/CQAzHgf.png)

The problem with this is that the example takes ~3 milliseconds to render. That number is too small to tell the real render time apart from noise. To address this, Fabric renders the same button 1000 times and measures a single time for the whole page (a sketch of this approach appears after these notes). But there is another problem here: it does not represent a real consumer scenario, since a consumer rarely renders 1000 identical buttons (where 999 of them come entirely from cache).

So what has worked best for us so far: whenever we were investigating a perf issue in Teams, we recreated similar UI as a perf example in the library. Here are some examples:

- Chat: https://fluentsite.z22.web.core.windows.net/maximize/chat-with-popover-perf/false
- Calling screen toolbar: https://fluentsite.z22.web.core.windows.net/maximize/custom-toolbar-perf/false
- Calling screen roster: https://fluentsite.z22.web.core.windows.net/maximize/tree-with-60-list-items-perf/false

This gave us three different things:

- we were able to investigate performance without any impact from services, the network, or the rest of the application
- once we optimized the example, we had a working example to show to Teams
- we now have a test to verify we are not regressing

So what I would propose is to add more of these examples representing parts of a real UI, measure the performance of these examples as part of the PR build, and chart the numbers.

Whenever we discussed compound perf examples, we also used the Semantic Layout examples as a good illustration of what the compound examples can look like.
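
To make the "render the same thing 1000 times" idea concrete, here is a minimal sketch, not the actual Stardust/Fabric harness: it mounts many copies of a component in one synchronous pass and times the whole page, so the signal is large enough to stand out from timer noise. The `@fluentui/react-northstar` import path, the `content` prop, and all constants are assumptions for illustration.

```tsx
import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { Button } from '@fluentui/react-northstar'; // assumed import path

const COPIES = 1000; // same idea as Fabric's 1000-button page
const SAMPLES = 10;  // repeat to average out run-to-run variance

// A page of COPIES identical buttons, rendered in a single tree.
function ManyButtons() {
  return (
    <>
      {Array.from({ length: COPIES }, (_, i) => (
        <Button key={i} content="Click me" />
      ))}
    </>
  );
}

function measureOnce(container: HTMLElement): number {
  const start = performance.now();
  // Legacy ReactDOM.render is synchronous, so the delta below covers the
  // full render + commit of all COPIES instances.
  ReactDOM.render(<ManyButtons />, container);
  const elapsed = performance.now() - start;
  ReactDOM.unmountComponentAtNode(container); // reset between samples
  return elapsed;
}

const container = document.createElement('div');
document.body.appendChild(container);

const samples = Array.from({ length: SAMPLES }, () => measureOnce(container));
const median = samples.sort((a, b) => a - b)[Math.floor(SAMPLES / 2)];
console.log(`median render time for ${COPIES} buttons: ${median.toFixed(1)}ms`);
```

Reporting the median rather than a single run is one way to cope with the noise the notes describe; the same harness works for the scenario-style examples (chat, toolbar, roster) by swapping `ManyButtons` for the recreated UI.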
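
And a hedged sketch of the "verify we are not regressing" idea for the PR build: compare the measured median against a checked-in baseline with some tolerance and fail the build on a significant regression. `assertNoRegression`, the `Baseline` shape, and the numbers are illustrative, not an existing API.

```ts
import { strict as assert } from 'assert';

interface Baseline {
  medianMs: number; // checked-in median from a previous healthy build
}

// Fail the PR build if the measured median exceeds baseline * tolerance.
// The tolerance absorbs normal CI-machine variance between runs.
function assertNoRegression(measuredMs: number, baseline: Baseline, tolerance = 1.2): void {
  const limit = baseline.medianMs * tolerance;
  assert.ok(
    measuredMs <= limit,
    `perf regression: ${measuredMs.toFixed(1)}ms > ${limit.toFixed(1)}ms ` +
      `(baseline ${baseline.medianMs}ms * ${tolerance})`
  );
}

// e.g., with the median from the measurement sketch above:
// assertNoRegression(median, { medianMs: 180 });
```

The same measured numbers can be written out per build to produce the charts mentioned above, so a slow drift is visible even when no single PR trips the threshold.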