# Metrics Discussion (EMS Aug 2020 - Virtual)
Date: August 12, 2020
Facilitator: Sofia Nguy
Notetaker: Keeley Hammond
Zoom Mods: Charles Kerr
Miro link: https://bit.ly/ems-metrics-board
## Intro
Sofia: Let's get started! What information can we gain that's helpful to maintainers? That's a very broad topic - I think we can start by brainstorming exactly that. What information could we measure?
{ Group brainstorms }
{ Brainstorming is very active! }
## Brainstorming
Sofia: Going through the issues - we have a lot of them, so let's sort them and make sure they're clearly labelled. We have categories for:
- Performance
  - "time to value": cold start to web content visible
  - what codepaths are hottest?
- Issue Tracker & PRs
  - e.g. do maintainers wait similar times to get work reviewed
  - what % chance does an issue have of getting fixed
  - what % chance does a PR have of getting merged
  - Number of PRs merged from outside contributors
  - do maintainers have similar experiences by the numbers?
- What's Crashing
  - Stability churn per feature
  - Are we crashing more?
  - Stability trends across releases
- CI
  - What is the trend of CI run time?
  - What tests cause CI to fail most often?
- Who's Paying?
- APIs
  - high-risk components or APIs
  - What APIs are being used?
  - Discovering which APIs are high risk to change
  - Experimental API usage
  - What Electron APIs are used the most? The least?
  - How are apps using the main/Node process?
  - How many apps are following security best practices...or not
- Memory
  - Does Electron memory use change across releases for the same tests?
  - Memory leak detection
  - baseline/idle memory usage
  - What do maintainers find to be the biggest slog?
- Installer Technology
  - install & update mechanisms used by apps
  - How are apps auto-updating?
  - How are apps building/packaging? (electron-builder vs forge vs packager)
  - What kinds of installers do apps create? (e.g., Squirrel, DMG, Snap)
- Dev Experience (Maintainer Pain/Dev Onboarding)
  - Time to make a hello world app
  - what dollar-equivalent cost is there in adopting Electron?
  - What types of applications are not suitable for Electron?
  - where are maintainers burning time?
  - Time to have a working clone of Electron locally (Windows/macOS)
- App Usage per Electron Version
  - what is the total install breakdown across all apps?
  - Usage per operating system (Mac/Windows/Linux)
  - How many end users of Electron are there?
  - Telemetry of a common desktop install
  - how many app developers are there?
  - native modules used - this seems to be a pain point, and the data could help a collaborative migration to N-API or context awareness
  - where are apps stuck on upgrading?
  - How long does it take each app to update to the next major?
Sofia: Everyone vote on what you think are the most important.
{ Everyone Votes }
Top Three:
1. Memory
2. Dev Experience
3. APIs
Sofia: So even though we voted, that doesn't mean the other metrics _aren't_ important! We'll just go back to them when we have more time.
### Discussion
#### Memory
Jeremy: One thing we could do is run LSAN on Nightly - it could find memory leaks in Electron when we run our test suites, for instance.
Erick: Follow-up - we discussed this in NYC, but is there anything that stops us from implementing that?
Jeremy: Depends on ASAN - I have a PR open to enable ASAN, but it fails our tests and I haven't had the time to fix it. So we already have memory bugs and we need the time to go in and fix them: https://github.com/electron/electron/pull/23570
Shelley: Looks like only one or two tests are failing! So we could put some effort in.
Sam: In my experience, 99% of issues are caused by app code and not by Chrome. It's very easy for us to leak data from JavaScript.
Jeremy: We have been changing some of our APIs to be friendlier and garbage collect automatically - how can we collect metrics about this issue? That would be useful.
Anton: I think we should also ask why. Why should we collect this metric?
Jeremy: This is definitely a thing that we can do in CI. It's difficult to see how this could be effective for collecting data from third-party apps in the wild. We'd be collecting on the specific test rather than the full app.
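{ Notetaker aside: a minimal sketch of the per-test heap check Jeremy describes - not existing Electron tooling. It assumes the test process is launched with `--expose-gc`, and `runScenario` is a hypothetical stand-in for whatever API the test exercises: }
```typescript
// Rough sketch of a per-test memory-growth check for CI (an assumption,
// not existing Electron tooling). Requires launching with `--expose-gc`
// so the global gc() is available; runScenario is a hypothetical test body.
declare const gc: () => void;

async function assertNoHeapGrowth(
  runScenario: () => Promise<void>,
  iterations = 50,
  allowedGrowthBytes = 1024 * 1024, // 1 MiB of slack for allocator noise
): Promise<void> {
  // Warm up once so lazy, one-time allocations don't read as a leak.
  await runScenario();
  gc();
  const baseline = process.memoryUsage().heapUsed;

  for (let i = 0; i < iterations; i++) {
    await runScenario();
  }
  gc();
  const grown = process.memoryUsage().heapUsed - baseline;

  if (grown > allowedGrowthBytes) {
    throw new Error(`heap grew ${grown} bytes over ${iterations} iterations`);
  }
}
```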
Jacob: We could also think about Sentinel - it already runs on third-party apps.
Anton: So we're depending on Chromium/Node. Could this generate a lot of noise?
Jeremy: If we could bisect to a Chrome bump, then we would know it's taking more memory.
Sam: Chrome is pretty good about fixing perf regressions quickly. It's actually a good thing if it generates noise.
Sofia: Awesome. Let's touch on another one.
#### Dev Experience
Sofia: Let's start with - why do we want this?
Jacob: We're often seeking more contributors, but I think we could also look at our current contributors and see where we can give wasted time _back_. If someone is a regular developer, they have a working build that's quick to sync. If they only contribute once a month, they might have a lot of friction that keeps them from contributing.
Sofia: Other thoughts?
Jacob: I was thinking more about maintainers specifically.
Sam: I think "time to hello world" might be misplaced.
Anton: This is for someone who wants to start working on Electron. Go to docs, start on machine, etc.
Numaan: I think we want to bring on a full contributor at Postman, someone who contributes maybe once per month.
Anton: Is Codespaces something we could use?
Sam: No, it's too small - nowhere near enough space. 20GB, and my checkout of Chrome alone is 30GB.
Felix: It barely handles Electron Fiddle.
Jeremy: We're on metrics - let's focus on measuring, not solving.
#### APIs
Sofia: What do we want to measure here?
Sam: We could download built versions of various apps and check which APIs each one uses.
Jeremy: There are some apps we might miss, like Messenger, which doesn't have a direct download option.
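{ Notetaker aside: a rough sketch of the scanning idea Sam mentions. It assumes the app bundle has already been extracted (e.g. `npx asar extract app.asar out/`), and the API list here is illustrative; crude string matching like this undercounts in minified bundles: }
```typescript
// Hedged sketch: walk an extracted app bundle and tally mentions of a
// few well-known Electron APIs. The API list and matching are illustrative.
import { promises as fs } from 'fs';
import * as path from 'path';

const APIS = ['BrowserWindow', 'ipcMain', 'ipcRenderer', 'dialog', 'webContents'];

// Recursively yield the .js files under a directory.
async function* jsFiles(dir: string): AsyncGenerator<string> {
  for (const entry of await fs.readdir(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) yield* jsFiles(full);
    else if (full.endsWith('.js')) yield full;
  }
}

async function tallyApiMentions(appDir: string): Promise<Map<string, number>> {
  const counts = new Map<string, number>();
  for (const name of APIS) counts.set(name, 0);
  for await (const file of jsFiles(appDir)) {
    const source = await fs.readFile(file, 'utf8');
    for (const name of APIS) {
      // Count occurrences via split; crude but dependency-free.
      counts.set(name, counts.get(name)! + source.split(name).length - 1);
    }
  }
  return counts;
}

// Usage: tallyApiMentions('out/').then(console.log);
```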
### Wrap Up
Sofia: I think we need a follow-up on what we want to measure, why we want to measure it, and then how. If we don't know why we want to measure something, we probably shouldn't do it.