# Node.js collab summit 2025 Paris, Day 1
- Zoom links: [Day 1](https://zoom.us/j/94250159683), [Day 2](https://zoom.us/j/93791639410)
- Notes: [Day 1](https://hackmd.io/@NI182l8fTYSk-nr0xebGLA/Hku7_NvTJl/edit), [Day 2](https://hackmd.io/@NI182l8fTYSk-nr0xebGLA/BkMHFVvpkl/edit)
- [Detailed information about the summit]( https://github.com/openjs-foundation/summit/issues/433)
- [Schedule]( https://docs.google.com/spreadsheets/d/1_FNNbKYcc032NIUqeJ3SZP3PM71aYMuv9D5JbwaS9Iw/edit)
- [Detailed guide on attending in person]( https://hackmd.io/@NI182l8fTYSk-nr0xebGLA/Sy0Ze4_p1e)
- [Code of Conduct](https://events.linuxfoundation.org/about/code-of-conduct/)
- Slack channel: [#collab-summit-paris-2025](https://openjs-foundation.slack.com/archives/C087WL02FEY)
## Intro
Bryan English walked through the Datadog etiquette rules.
### CI Incident
Matteo Collina and Richard Lau ran the room through a vulnerability discovered and reported on March 21st. https://nodejs.org/en/blog/vulnerability/march-2025-ci-incident
Test machines were compromised via a TOCTOU issue with the `request-ci` label. Jenkins CI did not check whether "spurious" commits had been added after the job was started.
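To illustrate the class of bug (with hypothetical helper and field names, not the actual Jenkins code): the vulnerable flow re-resolves the PR branch at job start, so commits pushed between the `request-ci` label being applied (time of check) and the job starting (time of use) get built on the CI machines. Pinning the reviewed SHA closes the gap:

```javascript
// Sketch of the TOCTOU pattern. All names here are illustrative, not Node's
// actual CI code.
function resolveBuildSha(pr, { pinned }) {
  // pr.approvedSha: the SHA that was reviewed when the label was applied.
  // pr.headSha: whatever the branch points at when the job actually starts.
  if (pinned) {
    // Safe: build exactly what was approved; fail if the branch moved.
    if (pr.headSha !== pr.approvedSha) {
      throw new Error('branch moved after approval; refusing to build');
    }
    return pr.approvedSha;
  }
  // Vulnerable: time-of-check (label applied) vs time-of-use (job start).
  return pr.headSha;
}

// An attacker pushes a spurious commit between labelling and job start:
const pr = { approvedSha: 'aaa111', headSha: 'bbb222' };
const vulnerable = resolveBuildSha(pr, { pinned: false }); // builds 'bbb222'
```

The "CI will run on the latest approved commit" mitigation described below corresponds to the `pinned` branch of this sketch.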
**Explicit**: end users were not impacted
Design choice: The CI and Test infrastructure has been intentionally kept separate to avoid cross-contamination risks like this.
Antoine du Hamel: only PRs that have been approved will work with the request-ci label for now
Work is ongoing to figure out how to run CI on unapproved PRs
Matteo: no help from GitHub support for this use case. Only option: build processing infra, and/or get a dump and figure it out ourselves. A lot of work.
CI will run on the latest approved commit. Could probably relax this for collaborators to run on arbitrary pushes. Collabs already have greater access than this to do harm anyway.
Thanks to Robin and folks from the Linux Foundation who helped deal with this. Blog post in the coming weeks. This is ~half public, so refrain from broadcasting for now.
Antoine: if you use a similar label-based workflow for your CI - consider checking it for this issue :)
<!-- redacted -->
Richard: Rebuilding machines is done via Ansible - it requires manual action in, e.g. Digital Ocean, to recreate, and then 5-10 minutes of Ansible to complete. It takes time. Will result in longer build queues.
Alex: Any ROI in making the CI runner lighter to spin up?
Paolo: We need to support platforms where Docker would not be an option
Richard: Some scope to pursue isolation to de-risk, but requires investment
Antoine: Is the trade-off always caching/speed vs. security?
Richard: we use ccache to cache compile artefacts. It is possible to alter subsequent builds this way. Some platforms take 6-8 hours to build, and it's getting worse with successive V8 versions. On 2-core machines that makes builds pretty untenable. Work is ongoing (permissions issues?) to share a cache between workers for macOS. This would unblock throwaway agents. Our Jenkins setup dates back to 2014, predating GitHub Actions. High-level problems: (1) people's time & (2) minimising disruption to other collaborators.
<!-- 15 mins left about CI before Lightning talks -->
### CI Reliability
Joyee: Sharing the CI reliability report. github.com/nodejs/reliability
Richard: it looks at the 100 most recent jobs. That will reach back before CI lockdown, and include some test jobs post-lockdown.
Joyee: NCU CI. Queries the 100 latest jobs and tries to find patterns using some regular expressions. It generates a report [link](#). It opens an Issue - every day a new Issue. Joyee implemented it in 2018, with "primitive" parsing of `console.log`s. It shows a high-level pass/fail count. Collaborators probably know full well that CI is flaky, and these reports reflect that. If people had time to look into it, this should point where to look. `GIT_FAILURE` is relatively rare; if you're not a Build WG member, it usually requires asking in the Build Slack channel to get someone to intervene, as it tends to require permissions. You _might_ be able to provide advice. CCTest and JSTest failures and below tend to be where you might be able to lend a hand. Something affecting e.g. 14 unrelated PRs is an example of a flake. The report shows the platforms where this appeared, which might help narrow it down. Joyee shows an example.
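The mechanism can be sketched roughly like this (the log lines and pattern names are invented for illustration, not the real NCU rules): scan each job's console output with regular expressions, then group matches that appear across multiple unrelated PRs.

```javascript
// Toy version of the reliability report's classification step.
const patterns = [
  { name: 'JS_TEST_FAILURE', re: /not ok \d+ (\S+)/ },
  { name: 'GIT_FAILURE', re: /fatal: unable to access/ },
];

function classify(jobs) {
  const byFailure = new Map();
  for (const { prId, log } of jobs) {
    for (const { name, re } of patterns) {
      const m = log.match(re);
      if (!m) continue;
      const key = `${name}:${m[1] ?? ''}`;
      if (!byFailure.has(key)) byFailure.set(key, new Set());
      byFailure.get(key).add(prId);
    }
  }
  // Only failures seen in at least 2 distinct PRs are likely flakes.
  return [...byFailure]
    .filter(([, prs]) => prs.size >= 2)
    .map(([key, prs]) => ({ key, prs: prs.size }));
}

const flakes = classify([
  { prId: 1, log: 'not ok 12 test-inspector-foo' },
  { prId: 2, log: 'not ok 12 test-inspector-foo' },
  { prId: 3, log: 'fatal: unable to access https://github.com/...' },
]);
// Only the failure seen in two unrelated PRs survives the filter.
```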
Jacob: So it tracks how many times it fails a PR, does it track retries?
Joyee: It's based solely on PR IDs.
Jacob: Would it be more useful to reflect if something subsequently checks on a retry.
Thomas: It sounds like it would capture failures even in a PR that ultimately gets a pass.
Joyee: Only a test that fails in at least 2 independent PRs will show up in the report. If you break your own tests, that's your own problem :) The CI shows the first/last PRs that the failure appeared in, which can help identify when it began. The `REBASE_ONTO` SHA helps guide the way sometimes. Timeouts are hard :( When you are at your wits' end with CI, check out the report! It may help you decide whether the thing that's making your CI red is a flake or not. The report we're looking at right now isn't too bad - relatively few root causes.
Luke: What's a typical time frame for last 100 jobs?
Joyee: Can be weeks. This week is obviously exceptional. Around 3 weeks typically. We keep past reports back to ~2020. Reports were moved to files in response to hitting Issue limits. The oldest reports were in Issues (currently 45 pages).
<!-- end of designated CI time -->
<!-- now should be lightning talks, but no submissions -->
Alex: Do we have any _statistical_ analysis on failing tests? Other than the count
Joyee: No, if anyone wants to add sophistication, PRs welcome :D There is a "stress test job". An argument passed to the Python test runner to see if a failure can be repro'd under pressure. If you want to isolate something to know if it's a flake, that's where this shines. But that is not used as part of this flake-detection.
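A quick back-of-envelope for why the stress job shines at isolating flakes: even a failure probability that is invisible in a single run becomes near-certain to show up over many repeats. If a test fails independently with probability p per run, the chance of at least one failure in n runs is 1 - (1 - p)^n (the numbers below are illustrative, not a claim about Node's CI):

```javascript
// Probability of seeing at least one failure in n independent runs.
function pAtLeastOneFailure(p, n) {
  return 1 - (1 - p) ** n;
}

// A 1-in-1000 flake is essentially invisible in a single CI run...
const single = pAtLeastOneFailure(0.001, 1);    // ≈ 0.001
// ...but shows up about 95% of the time in a 3000-run stress job.
const stress = pAtLeastOneFailure(0.001, 3000); // ≈ 0.95
```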
Matteo: When adding a new test, let's make sure it's not flaky? This represents more work.
Joyee: Build WG might be able to say better, but this would take a lot of resources. Also need to be aware of which platform.
Matteo: We had some universally flaky tests. Some are platform specific. Was thinking something GitHub-actionsy, "if a new file is added, run it 100 times". Try to stop the problem at source.
Stephen: are flakes typically coming from new tests being added, or modifications to existing functionality? Suspect the latter is a big problem.
Joyee: IME inspector/watch (for example) are just flakier than others. Maybe we can invest our stress testing in those directions.
Alex: Do we have a budget for compute, that could be applied to randomly run tests on different platforms to try and discover failures?
Joyee: We do have daily CI on `main` branch. But nobody has the bandwidth to look at it. Daily `main` has in the past failed for several months without intervention. For example, right now! Since at least Feb 27th.
Richard: yes it has been red for a long time. Due to the pointer compression build. Volunteers wanted! The IBM people are aware of the IBM failure. Internet tests tend to only run on the dailies. Richard has flagged this up in pointer compression discussions.
Thomas: could the pointer compression build be cut?
Richard: it could be, but people requested it to be added.
Richard: it is possible to mark tests as flaky so they explicitly show up as flaky.
### Experimental WASM Modules
Chengzhong: https://github.com/nodejs/node/pull/57038 from Guy Bedford
A PR from Guy Bedford that will unflag `--experimental-wasm-modules`. It will allow importing WebAssembly modules without a flag and without boilerplate.
This PR was originally set up for unflagging in Node.js 24, but it takes more time to coordinate with browsers so we missed the 24 deadline. Marco marked this PR as `semver-major`, so it should only be unflagged in 25.
Could we remove the `semver-major` label, making it available sooner on 24? This would let WebAssembly build tools integrate with ESM support more smoothly (build tools adopt changes slowly, so landing only on v25 delays the whole feature). It could later be back-ported to other LTS lines. Compare this to precedents like unflagging TypeScript, which was not semver-major and is also in the modules space.
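Roughly what unflagging buys: a `.wasm` file becomes importable like any ES module. The import lines below are shown as comments because they need the flag on current release lines; the runnable part uses the WebAssembly JS API with the 8-byte header of an empty module (magic `\0asm` plus version 1) as a stand-in:

```javascript
// With --experimental-wasm-modules (and, after the PR, by default):
//   import * as lib from './lib.wasm';         // instantiated module
//   import source libSource from './lib.wasm'; // source phase import
// Runnable stand-in: validate and compile a minimal (empty) module.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm" magic
  0x01, 0x00, 0x00, 0x00, // binary version 1
]);
const valid = WebAssembly.validate(emptyModule); // true
const mod = new WebAssembly.Module(emptyModule);
const exportsList = WebAssembly.Module.exports(mod); // [] - no exports
```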
Stephen: what was the reason for suggesting it be major?
Marco: at the time, Node 24 was not out and it relied on something in Node 24. So I consider it fine to be semver-minor for Node 24.
Joyee: the correct label to use might have been "don't land on xyz", and the v24 requirement would be "propagated" transitively that way.
Chengzhong: it takes a bit more time to coordinate with browsers. If we can land in a semver minor that would be promising!
Matteo: tag it with "don't land on 23"
Chengzhong: backporting would be good for build tools like emscripten to migrate to
Marco: it relies on V8 version behaviour?
Chengzhong: not this change?
Marco: Guy said it relied on source phase imports?
Chengzhong: that's a WASM CG specification, not V8. Guy was saying the proposal is still in the standards process, but we would like to unflag the feature while keeping a flag to disable it with `--no-experimental-wasm-modules`. There is precedent for experimental things being on by default.
Nicolo: there is some ongoing discussion about when WASM exports globals
Chengzhong: this is another PR for WebAssembly global support: https://github.com/nodejs/node/pull/57525
<!-- end of lightning talk -->
### CppgcMixin with Joyee
CppgcMixin allows objects to be hosted in the V8 GC and managed by it. We make assumptions in our wrappers about how lifetimes are managed, so we need an abstraction layer to make sure it's tied in correctly: you go from a BaseObject to the environment via env - this is a shim. You inherit via the `CPPGC_MIXIN` macro, which constructs the base class; migrating a class usually involves changing a few macros. This turns references into `TracedReference`s, and there is a `Trace` method to let V8 do marking. Two classes are already migrated: `ContextifyScript` (which underpins `vm.Script`, and used to block a lot of people upgrading from Node 16(?)) and `ContextifyContext`. Joyee has drawn a fantastical ASCII diagram in the source. These classes don't have external memory, so they were relatively easy to migrate. We do not track them in heap snapshots. As we expand the number of BaseObjects we need to migrate over, we need primitives to track external memory, or that information will be lost in snapshots. It's only used for memory diagnostics, but it is a regression - e.g. we don't want an OpenSSL wrapper with a GB-sized growing vector of memory to be invisible in a heap snapshot.
Working on a way to track the external memory. The first step was to upstream a Cppgc API; V8 is not accepting it, as it's not useful for Blink. The other route is adding internal helpers to make it possible to track external memory like how we used to track BaseObjects: when we generate a heap snapshot, go through the list and annotate the nodes to show external memory.
There's a WIP PR: [link](#). It's stalled because CI was not working :sweat_smile:. `inline CppgcWrapperList* cppgc_wrapper_list` in `node_realm.h`.
Joyee shows memory in devtools, image from PR #51017. The migration is relatively simple on top of the helper updates.
PR #51017: `crypto: use cppgc to manage Hash`.
Weirdly it speeds up hashing performance benchmarks, because it speeds up garbage collection visibility.
Hoping to merge the helper PR when CI is back online.
:clap:
Matteo: will this new way of managing the objects in C++ improve GC times generally?
Joyee: usage-pattern dependent. So far haven't seen any _regression_ in microbenchmarks, and theoretically it makes sense that perf would be improved. These are traced handles.
Matteo: would it be possible to use this with JSStream?
Joyee: because its lifetime is tied to a libuv handle, it needs careful handling with traced roots. Haven't looked into that yet; it's more complex, but should be possible.
<!-- impromptu lunch break - the overwhelming sound of rumbling stomachs -->
## Node.js Follow-ups and quick updates
Facilitator: @JakobJingleheimer
Scheduled time: 13:30 CET
https://github.com/openjs-foundation/summit/issues/438
<!-- we back, resuming at 13:33 CET -->
Jakob takes the stand.
The purpose is to give updates on things discussed at the last Collab Summit.
<!-- TODO: Link to the doc plz -->
### Userland Migrations
It got started. The first thing was to correct `.ts` specifiers. It depends on Joyee's `registerHooks`, so it has been waiting for that to get backported to an LTS line. That happened a few hours ago :tada:, so once that release is out we can ship it for general consumption. Have been working with ? Team on automating releases for that.
We did get interest from a community author about another TS transform: transform from non-erasable to erasable syntax, to ease the burden of migrating to the Node-no-flag-supported set.
### DEI
Not where we thought it would all be. Started efforts here to have some sort of mentorship programme for people from under-represented groups who want to get more involved. Partnered with Code Bar(?) in the UK and started an Amsterdam chapter, which has had one meetup so far. Expect another in a few weeks.
### Data Analysis
We have a bunch of data from users/collabs that we wanted to expand the analysis on.
We have a new contributor, Zach, who will hopefully be here tomorrow. They have helped with previous surveys and will help with the upcoming one.
### Bluesky Account
We now have a Bluesky account following the Tooling discussion!
### Other
Module discussions ongoing. Something(?) shipped after the summit.
Discussion about Node configuration files is shipping in 22.9(?)
Migration to ESM. Some e18e projects working on migrating ecosystem packages to ESM. We could collaborate with them to document their experience.
Syncifying the module loader. Ran into some failing tests, and discovered bug(s). We will syncify the default path(?). The module loader "sped up" enough to expose the pre-existing bug.
Intend to fully syncify the ESM loader(?). Marco opened a PR to discuss marking CJS as legacy, what would need to happen to make ESM a first-class citizen in Node.js. Discussion ongoing, outcome still unclear.
Funding session: Michael unable to join remotely. Next10 team did talk about it.
Robin: board discussed it. Will follow up in next monthly session. No decisions made.
Diagnostics session(?)
Documentation session: there's a late-stage PR on something(?)
## Node.js collaborator-track mentorship programme
Facilitator: @JakobJingleheimer
Scheduled time: 14:00 CET
https://github.com/openjs-foundation/summit/issues/445
Jakob retains the stand.
We have some collaborators who have been unofficially mentoring contributors. There are also other contributors looking for mentors. The path to finding mentors is not reliable. We previously discussed low-key ways to improve. Some opt-in way for collaborators to indicate that they're available to mentor and at what capacity. Then a contributor looking for mentorship can be "nominated" for it, in the same way that collaborators are nominated. There are guidelines for the level/amount of interaction. Those mentees would likely get nominated for collaborator status ultimately, having been guided through the process for a period of time. Opening up the floor to feedback/thoughts/concerns/fears.
Joyee: We did have nodejs/mentorship - what happened to it?
Robin: we ran a request for mentors/mentees but struggled to find mentors.
Ruben: Yagiz and I are informally mentoring people
Robin: Lack of mentors resulted in disappointing the mentees.
Tierney: One of the problems was that mentors had too much on their plates. Supply and demand issue. Has a tendency to leave a bad taste
Matteo: I was a mentor in that program. Without wanting to offend an individual, the level of effort and ability of the mentee was insufficient to become a collaborator. They appeared to treat it as an opportunity for free education. The mentee needs to be someone with sufficient conviction for it to be worth the effort for all parties. I have mentored people outside of Node.js on codebases that were easier to grasp; Node is relatively hard. Example: we have plenty of Undici contributors who would not necessarily make good Node.js contributors - consider `node:http`, which is more difficult to onboard contributors to than undici.
Stephen: mostly agree, but this is true of software in general. How do we identify the promising candidates?
Matteo: Indeed, I am more enthusiastic towards mentorships where the mentee has done/attempted 3-4 contributions to Node
Jakob: and has shown some "skin in the game", not just looking for free education
Bryan: but for someone to contribute that much without mentorship - how?
Stephen: there are people who are familiar with one thing but not necessarily with another, like Rafael worked with me on async hooks, so now there's more than one person who knows how this works.
Jakob: need something that shows _promise_. Probably more than _just_ documentation fixes.
Tierney: sounds like we're describing outreach/onboarding - how do we "level up" people. People tend to contribute to something orbiting core to begin with, and then work their way in. Figuring out how to enable/support that transition is maybe the missing piece? Looking at it this way perhaps side-steps the problem of advertising "mentorship" as a blank check from the outside.
Stephen: maybe the terms need clarification. "Mentor" could be interpreted as mentoring how Node works, as opposed to onboarding someone into core contribution.
Robin: 2021 was the last activity on (the archived) nodejs/mentorship repo. It's been a while.
Jakob: sounds like positive sentiment, but we need to work on the how some more. Next steps to collate feedback from today and update the proposal in Next10 and re-present, hopefully before the next Summit.
Ruben: there was a recommendation about nominating people (to be Collaborators) after mentorship. I have a high bar for nominating someone for collaborator. Willingness to learn and progress are good, but even after a few months of mentorship I don't feel sure that they would be at the necessary level. The document seems to indicate that might be a conclusion. Don't want to set false expectations about the outcome of mentorship - the current phrasing could create an expectation. Could this create a harmful mentorship outcome if they don't meet that goal?
Tierney: we want to ensure that the verbiage lends itself to correct expectation setting.
Shelly: it's possible to state that something is the purpose of a program but not guarantee that that outcome will be achieved always
Jakob: if things seems to be plateauing there needs to be a way of communicating that - missing atm
Pablo: Verbiage that says: ~"graduating does not mean collaborator". The additional expectations to be a collaborator would ideally be better specified to allow objective evaluation
Jakob: those criteria should exist even outside of this program
Thomas: maybe the mentors should not always expect that the outcome is that the mentee would become a collaborator - and be ready to accept that and perhaps communicate it to the mentee
Pablo: anyone allowed to apply as mentee, but given few mentor resources, there ought to be a minimum bar. Oh, it's nomination based, ok just need explicit criteria
Stephen: Maybe two explicit tracks: (1) I want to grow my knowledge to be a better contributor, (2) I want to gain trust to become a collaborator. I've done (1) with no expectation that people were aiming to be collaborators.
Jakob: more casual mentoring?
Stephen: mainly just communicating a clearer delineation of expectations. Personally I've mentored contributors about async hooks, just to help more people learn about it
Jakob: that sounds like something outside the program, and less like an explicit assignment
Stephen: to avoid confusion about different paths it helps for them to be defined together
Pablo: for (1) it makes sense to pair mentees with mentors with particular expertise in a more casual way
Stephen: we have knowledge silos in Node core, if people make it very explicit that they're available to talk about a silo'd topic - it would be good to have a way to advertise _that_ kind of "mentorship".
Jakob: there is tribal knowledge within the "tribe" about where the expertise is. Maybe as part of the nomination request mentees should state their areas of interest.
Tierney: already mentioned that there are people mentoring outside of a system - would be interested to know if they perceived value in the increase in formality.
Jakob: Matteo/Ruben/Jakob/Rafael(Twitch)/Grace-Hopper-Conf-mentees are doing some form of mentoring - I feel that this would help _me_, but trying to see if that opinion is shared by others in this group. Are we all aiming at the same thing
Ruben: I agree that people are in favour of there being mentoring. The controversial part seems to me to be about mentioning collaborator status as a potential outcome. The objective thing is that we're trying to facilitate increased contribution to core. Collaborator status should follow naturally
Pablo: agree, that is the subtlety that is causing friction
Stephen: even as a core-contributor, it would be valuable to have even a basis list of people from sub-systems that are available to mentor on those. This is valuable for someone without the tribal connections
Ruben: even if I wasn't open to "mentorship" I would be happy to share my "knowledge island".
Thomas: like CODEOWNERS for mentorship/knowledge
Pablo: not _committing_ to anything long-term. Open to "20 minutes" here and there
Thomas: passive benefit: it shows the bus factor of tribal knowledge on particularly silo'd areas
Ruben: \<concrete proposal of where this mapping could live that I missed\> that involved the word "link"
## Next-10 Improving Collaborator Experience
Facilitator: @sheplu
Scheduled time: 14:30 CET
https://github.com/openjs-foundation/summit/issues/441
<!-- starting on time after a 10 minute break -->
Jean gives an introduction to how to register the pain points in a board, using EasyRetro, ranging from ragequit to smaller things
[The EasyRetro Board](https://easyretro.io/publicboard/uUlQgng4WngJPXjxLc0drc2BLEp1/01ab00f2-6ed2-4b3a-a651-869199cf7a39)
<!-- retro'ing commences -->
Jean: reading out huge painpoints, doing some merging
V8 - we're kind of blocked with our next major atm :eyes:
<!-- we did voting -->
Ruben: in the interest of time (minimal), I suspect we won't come up with a magical fix for flaky CI
Jean: the deadlock bug
many people in concert: "what is the deadlock bug?"
It is: https://github.com/nodejs/node/issues/54918
Volunteer to explain the deadlock bug plz
Brave warrior Stephen attempted to fix it but has not yet succeeded. Attempt at typing up Stephen's description:
> We delegate tasks to workers in a pool
> There's a queue for sending messages
> which causes the other side to wait while tasks are being sent
> If one of the threads (e.g. the main thread) triggers GC at a bad time, it will lock the main thread.
> At the same time a worker might send a signal, then block and wait
> Result: a bidirectional deadlock that the threads never wake up from
Reproducing takes 150,000 runs sometimes, so it is hard to reproduce reliably. Any time a worker thread stops this might happen
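A single-threaded illustration of the blocking primitive involved (this is not the actual bug, just the shape of it): `Atomics.wait` parks a thread until another thread changes the value and calls `Atomics.notify`. In the deadlock, the main thread and a worker each end up parked in a wait that only the other side could release. Here nobody will ever notify us, so without the 50 ms timeout this wait would block forever:

```javascript
// Int32Array over a SharedArrayBuffer, as required by Atomics.wait.
const shared = new Int32Array(new SharedArrayBuffer(4));

// Value at index 0 is 0 and matches the expected value, so the thread parks.
// With no one to notify it, only the timeout wakes it back up.
const result = Atomics.wait(shared, 0, 0, 50); // 'timed-out'
```

Note that `Atomics.wait` is permitted on the Node.js main thread (unlike in browsers), which is part of why this failure mode can freeze the whole process.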
Darshan: some discussion from January, Ben was interested to do consulting for TSC?
Joyee: what's more critical, security, CI or this?
Need money plz
### Extreme Consensus Seeking
Jakob: What does extreme consensus seeking mean?
Paolo: People stop engaging/burn out/surrender in the consensus-seeking process. Extreme democracy that leads people to exhaustion. Consideration: the discussions around ditching corepack took a year, for various reasons - and that time was not spent giving corepack a chance to respond. We lost collabs/TSC members. This repeated again on a collaborator nomination. We can't afford the loss of people! As individuals we should be clearer that we object (-1) and not do "-0.5" soft objections. If you don't want to be a concrete blocker, don't block.
Marco: Agree. Voting is hard. When we arrive at a vote someone is usually already exhausted from the conversation. It's seen as a last resort, when it should be something that gives people a voice and allows them to express opinion. Some folks leverage social media as a way to rally a crowd. I think making voting more available would improve collaboration.
Reminder: raise hands to speak
James: our review process going back to beginning of foundation has tried to be geared towards people with explicit reasons to say "no", and not that the author has to convince the individual to say "yes". Don't block e.g. because you don't understand it yet, or because you have some other notion on how it could be implemented. Have a concrete reason. This is the original intent
Ruben: Agree overall that voting is something we do too late. We're already too frustrated by the time we agree to vote. You can point to an earlier stage in the discussion when we could have initiated a vote. We don't invite people to de-escalate from the TSC side either. Think we could have done better and earlier. Things lingered with corepack - this always creates frustration. Idea: one TSC member is assigned to actively look into the topic and bring in people with opinions to drive it forward until a specific date. Then trigger a vote at that point.
Paolo: We pick a point and cut to the vote at that point. This allows(?) people to stop engaging and wait for the vote if announced ahead of time(?). Feel that people can object to things if they're not going the right way, but don't do it as a subtle source of friction. Be prompt to withdraw a block when contrary evidence is provided. Leaving objections standing is unacceptable.
Tierney: Don't permit blocking on not understanding. Allow people some time to figure it out, but don't have it be an indefinite blocker.
Jakob: Clarity around who is dissenting, rather than have it appear as a cacophony
Jean: when you get involved and are faced with 100+ pages, you wade through the history and follow all the peaks and troughs. Should we have something more formatted, like an RFC/ADR? When voting you should list your justifications.
Paolo: Blocking on non-understanding is implicit mistrust of the contributor. You can ask the author for an explanation. The Guide could/should say that collaborators are expected to provide explanations.
Stephen: We should have explicit rules for objections that are votable. No "I don't like it"
Tierney: This is not the first time we have had it as a suggestion that people should summarise things. I recall it going back to 2018?
Jacob: Maybe we should allow people who are subject matter experts to say "I am the expert on this, please come to me for explanation" (?)
James: when the PR would otherwise land with existing approvals, there should be a clear reason for blocking it.
Ruben: I have asked after blocks in the past and successfully dismissed them, this is a legit way of handling them. Remember this is a capability. Have not heard any dissent towards early voting, let's give it a try. Let's give AI a try for summarisation?
Tierney: make sure we at least tag people to summarize it in case the AI gets it wrong
Thomas: I put my hand down because it was about the AI summarisation
Paolo: Be clear that once the objection process starts, it is close-ended - the discussion can't go on forever. Nuance: an expert in an area not understanding might be a flag worthy of objection.
Case in point: Matteo has a good track record of vibes-based blocks being good
Next steps: emoji votes, AI summaries
### Too many open PRs, 500+
Jean: what about an automatic staleness closing bot?
Jacob: We might have it for Issues but not for PRs
Any objection?
James: PRs will be marked stale but not automatically closed
Ruben: One thing about stale-PRs. Maybe 1/3 of them are a valuable contribution, just with some minor hurdles to them finishing. Closing them is a bad experience for the contributor. We could instead try to go in and help move things to the finish line. This could be a good tie-in with mentorship (Jacob likes idea)
Joyee: recently resurrected PR about certificate store issue. Pinged the originator, are they still interested? 3 weeks silence? Pick it up. Had a good time. Don't perceive a problem with still-open PRs(?)
Stephen: noticed that a bunch of code-and-learns from last summit didn't make it
Thomas: there are 4 open from the Code and Learn in November [link](https://github.com/nodejs/node/pulls?q=is%3Apr++label%3Acode-and-learn+)
Bryan: during code-and-learn there were some comments on the PRs that the contributions weren't worthwhile; not a good look to impressionable new contributors
James: "these changes were too small" - turning away new contribs. We tried to overhaul the mindset that no change was too small for new contribs.
Jacob: not being able to backport seems legit
James: yes but new contributors deserve more context
Jacob: people having objections to typo fixes?
Bryan: we will get comments saying "what's going on" because they don't realise it's a code-and-learn
Jacob: are they sending duplicates or what?
Tierney: some collaborators don't realise the scope of code-and-learn and to apply a different filter
Jacob: people might be missing the label because it's subtle on mobile or invisible via email
Thomas: make the fact that it's a code-and-learn contribution unmissable - add boilerplate to the top of the PR description (people running the workshop would communicate this boilerplate)
### Hard to find information about niche components
Joyee: `CONTRIBUTING.md` may not be being advertised enough. Patak the vite maintainer was commenting that it was not clear where to start with Node contribution. Added something to the navbar on the website to remedy this. Maybe we're missing ToC of technical documents inside the project - even I struggle to find things sometimes, and that's with me knowing the keywords to look for. Sitemap maybe?
Jean: What if I want to know about snapshots? Something pointing to main resources, not making people guess
Jacob: _almost_ logged this as a frustration: things can only be found by those that know where it is. Magicians are able to send links with ease with answers to questions, despite you trying to find that yourself. Maybe more markdown isn't the answer
Joyee: speaking of AI - could Copilot/ChatGPT assist with this? If these things are documented already then could this be the answer
Thomas/Jean: both had good experiences feeding in some internal documentation into GPTs and being able to query
Jacob: how could we provide this to others to avoid people spinning up microcosms of AIs
Marco: on our website we have a search engine that is AI-backed. It has an index of the files. We could use something similar that has indexed all the docs. Not sure if we fed it CONTRIBUTING but we could
:shipit:
Jean: could be a fun project. :large_green_circle:
Marco: could open an Issue on the Website repo(?)
### On the issue of some people being very loud
Jean: Would setting a deadline help?
Paolo: Yes, but it would also create a problem if some people want to stall the PR
### New Collaborator Items
Jean: Rafael is doing good work with live streams to provide visibility to prospective contributors. We could also use the aforementioned stale-PR effort, which might provide a path there.
Thomas: technical limitation - people who are not collaborators cannot push to existing PRs
Jacob: they can ask other collaborators. If the author does not respond for a while it's fine to do it.
<!-- t-shirt distribution time -->
## Bring AsyncLocalStorage to the Web
https://github.com/openjs-foundation/summit/issues/446
Facilitator: @legendecas
Scheduled time: 16:00 CET
Chengzhong: I work on (maintain?) OpenTelemetry API for JS. We're planning to bring AsyncLocalStorage to ECMAScript. (it is on the screen atm). Analogous APIs exist in a variety of other languages. Other Observability tools depend on AsyncLocalStorage. Given our hosts, let's take a look at DataDog's example ;)
Does it work on the web? No
Frameworks need to instruct users to compile to pre-ES2017 targets so that Promise can be patched to update context; or to provide a non-standard custom compiler that transforms async/await with extra semantics.
Infeasible for Observability libraries, such as APMs (Application Performance Monitors). Can't ask people to transpile back (8 years!) or to use non-standard compilers.
Having the API universally available in ECMAScript will ensure compatible behaviour between different runtimes.
Node.js' security model differs from the web's: Node.js assumes all script is trusted; not so in the browser, where website code is untrusted and must itself be guarded against cross-site scripting.
Need to convince browsers that it's worth their effort to implement the API - that we will actually use it
Lots of Web APIs need to be specified with "deterministic contexts"
We have split a new proposal out: "Disposable Support" - based on a Stage 3 proposal: Explicit resource management.
Bryan: what's an example of a web API which should _not_ propagate context?
Chengzhong: `setTimeout` (and others) should propagate. For _not_: e.g. performance observers.
Nicolo: It's not just about propagation/not. We need to be able to distinguish when we should propagate less
Matteo: problem I've mentioned before: the current in-standardisation spec does not have an equivalent of `enterWith`. It is the "escape hatch". Some APIs/functions/user-patterns cannot be wrapped in a function. In those cases `enterWith`, taking the current scope, is the saviour. With Disposable this may work, but it's in a different spec.
Chengzhong: This disposable bit is split out from the main proposal
Matteo: my concern is that without the `enterWith` escape hatch there is the risk that AsyncLocalStorage progresses and the escape hatch does not and stagnates. Then Node.js is left holding something less than what we have and can't easily be adopted. There are a lot of people using async hooks directly. (Otherwise extremely good work!)
Chengzhong: ...dropped
Stephen: using is not a 1-1 replacement for enterWith. It lacks top scope mutation. In ESM it's inconsistent.
Chengzhong: the syntax is intended to reduce the chance of it being misused like `enterWith` in event listeners. It is possible to invoke the symbol method manually, in which case it is identical to `enterWith`. I am confused by the claim that the `using` declaration differs from `enterWith` in general semantics.
Stephen: There were asks that the symbol methods must be used with a `using` declaration.
Nicolo: ..it's not there yet
Chengzhong: it's important for OpenTelemetry to have a way to extend with the `using` declaration.
Andreu: piggybacking on the previous question: do we know how much code relies on `enterWith` propagating across modules via top-level scope(?). Even if there is this escape hatch, do we know if it would break users?
Stephen: there are a bunch of things on screen: cls-hooked, dd-trace, nest-cls, @mikro-orm/core, lots of weekly downloads
James: bear in mind: does it introduce any new security concerns - `enterWith` was raised as one of those. It allows current frame to be modified by something not visible outside the current execution scope. Need to balance what is needed against what gives it greatest chance of committee success.
Andreu: One other diff between `enterWith` and Disposable - calling async/await on something(?) - hope that would not be the case with Disposable. This was already a separate proposal by Stephen. Do we know that this will not break ALS users?
- About propagation, the idea is that if it schedules a callback then it should propagate.
Events - can of worms, maybe we can choose to propagate in a subset of events for which people agree would be useful.
Stephen: on the security aspect James mentioned. Part of the security model of ALS is that you have a variable/handle to the thing contained; by the act of sharing it you grant permission to access what is within. Consider splitting it - some languages have reader/writer sides of channels. A thing to consider: perhaps grant only the ability to read from a store, not write to it. Having access at all is arguably already the gate. Generally don't see a problem from a security angle; it's on the person that gives away the handle
James: yes and no
Stephen: the thing at the start of an ALS function - won't get into flow-through semantics. I've encountered a few issues before with users doing `enterWith` at the start of ALS, storing a task representing it, and something leaks out. Confused when they try to do something again after an `await` and get the older value. Aforementioned flow-through semantics. Some are confused that they get it out at all when using Promises (not async/await).
James: re: splitting reading/mutation - personally that would solve it from the discussions I've had.
Andreu: you can solve it by passing a callback(?) - but ack that having something in the language could help normalise/clarify. Will think about it. Re: Stephen's point(s): users expect it to behave the same way before/after - but with flow-through you would not get that context from a synchronous caller after the invocation, only after you `await` it.
Stephen: users are expecting consistency - functions as containers, as if writing to a global that is only available to a narrow scope. They make assumptions about things being available in subsequent execution, expecting values to be contained.
Nicolo: want to go back to Matteo's point about it being impossible to migrate if features are missing. Do you think Node could build ALS on top of the proposal, but with Node bits added? Gaming out the worst-case scenario.
Matteo: no, that's the problem. Depending on how it is implemented, if there is an escape hatch like `enterWith` in some V8 API, then it could be fine. If the box is closed and can't be extended: no. The things we mentioned earlier rely on the ability to maintain the current scope, and they do not appear to be migrating over. We could leave the old API behind forever. We want a long-term path. Consider it possible with Disposable.
James: that is happening. ALS right now, following updates from Stephen last year, is using a mechanism in V8 for propagating context. That is the same mechanism the new API would use. In theory we could keep ALS in place using the exact same mechanism and things would just work together. `enterWith` and exit still work as-is. The discussion is whether it will be a JS-exposed API; it has no bearing on whether V8 continues to expose the mechanism. Long-term it should be possible for ALS and ALC to share the underlying model.
Andreu: The V8 behaviour ALS is built on top of: we hope to build that on top of ALC. It only allows storing one value at present (some acronym I couldn't catch). Once we implement the Disposable proposal the behaviour might change. Let's talk about how we do this in future.
James: repeating: this V8 API only stores a single value, so we use it to store a Map, and each key is an ALS variable. The change being looked at is that V8 might expose a Key-Value store directly. Then we wouldn't need to store a map, V8 would. Materially this only changes the impl.
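The implementation James describes can be sketched in plain JS (this is illustrative, not Node's real code): V8 exposes a single continuation-preserved value, so Node stores one Map in it, keyed by AsyncLocalStorage instance. Here the single V8 slot is simulated with a module-level variable.

```javascript
// Simulated single continuation-preserved slot holding a Map,
// keyed by ALS instance - the scheme Node uses on top of V8.
let slot; // stands in for V8's single continuation-preserved value

class MiniALS {
  run(store, fn, ...args) {
    const previous = slot;
    // Copy-on-write: each run() installs a fresh Map so sibling
    // continuations keep seeing the old one.
    slot = new Map(previous);
    slot.set(this, store);
    try {
      return fn(...args);
    } finally {
      slot = previous;
    }
  }
  getStore() {
    return slot?.get(this);
  }
}

const a = new MiniALS();
const b = new MiniALS();
a.run(1, () => {
  b.run(2, () => {
    console.log(a.getStore(), b.getStore()); // 1 2
  });
});
```

If V8 exposed a key-value store directly, the Map layer above would move into the engine, which (as noted) only changes the implementation, not observable behaviour.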
Andreu: you could store a map in it(?). Better for Node.js to use the underlying machinery directly, but under the hood.
Stephen: in terms of reimplementing ALS on ALC: the only dealbreaker might be the immutability question. Don't have a full answer for that yet. If we could solve that we could achieve equivalency. Users expect it to work the way it does today. We can achieve bug-for-bug equivalency, but do we want that? Discussed earlier: the machinery for either flow is basically identical - we capture before an await and figure out which one to restore on the other side. It is reasonable to support both flow models. APMs might not care to adopt ALC until it does flow-through. Having equivalent capability to what we already have isn't attractive; having a fix for issues is. Maybe it's faster, or more compatible, or supports browsers.
Stephen/Bryan: do APMs care about browsers? Maybe they don't because of the lack of this stuff
Nicolo: all the frameworks that transpile picked the semantics of (?) - Vite, Angular, Svelte. Are there other use cases?
Stephen: in the browser, trends have followed functional flow with React. In Node things have tended to be more procedural/event-based so this is still a fit.
Chengzhong: this is not just a map of the React concept; it matches lower-level trends in other languages. If we want a change in the tracing model it would be better done in OpenTelemetry or tracing libraries first, to convince browsers it's worthwhile. Node.js is not the only runtime that implements the ALS API; we also have the ? runtimes as well. Fair to say we have different use cases, but - inaccurate to say ALS is a bug(?)
Stephen: various users complain that it is a bug - subjective. An argument for two workflows
Chengzhong: prioritise APIs that are universally present in other languages and adopted in the community. That's why we're prioritising bringing ALS to the web. There's still design space to support another workflow. Need to prove that it's going to be used and worth implementing at the language level
Stephen: proof of value is a thing that worries me. If we implement this and then go advocate for the other flow, with it being slightly different, are they going to question why a 2nd thing is needed? It's harder to add to something after it has shipped than before.
Chengzhong: indeed need to prove a new flow is needed
<!-- end of the queue -->
Sanofi event is this evening, from ~now until 9:30. Best to start at 6:50(?); it's 30 minutes away from here.