# Node.js collab summit 2025 Paris, Day 2
- Zoom links: [Day 1](https://zoom.us/j/94250159683), [Day 2](https://zoom.us/j/93791639410)
- Notes: [Day 1](https://hackmd.io/@NI182l8fTYSk-nr0xebGLA/Hku7_NvTJl/edit), [Day 2](https://hackmd.io/@NI182l8fTYSk-nr0xebGLA/BkMHFVvpkl/edit)
- [Detailed information about the summit]( https://github.com/openjs-foundation/summit/issues/433)
- [Schedule]( https://docs.google.com/spreadsheets/d/1_FNNbKYcc032NIUqeJ3SZP3PM71aYMuv9D5JbwaS9Iw/edit)
- [Detailed guide on attending in person]( https://hackmd.io/@NI182l8fTYSk-nr0xebGLA/Sy0Ze4_p1e)
- [Code of Conduct](https://events.linuxfoundation.org/about/code-of-conduct/)
- Slack channel: [#collab-summit-paris-2025](https://openjs-foundation.slack.com/archives/C087WL02FEY)
## Next Steps for Single Executable Applications
Facilitator: @lukekarrys
Scheduled time: 10:30 CET
https://github.com/openjs-foundation/summit/issues/447
<!-- good morning -->
**Luke**: "SEA"s for short. Not a contributor to the feature, but enthusiastic about the feature and want to see it succeed. This is a quick preview and a run through of issues I think are important to address.
Our use case is vlt, turning our CLI into a binary. Our development is all in Node, but today we use Deno to compile for distribution. We would prefer to use the same runtime for both development and deployment, Node.js.
ESM Support - SEAs require a single CJS file at present. Folks typically aren't objecting to the single, bundling, requirement. We're using esbuild. TLA/import() in the graph not available atm.
No `NODE_OPTIONS` (or flags?)
No ability to freeze the `NODE_OPTIONS` for the lifetime of the script
A more streamlined API for (?) would be good
Larger issues:
* Virtualized file system
* Built-in bundling
* Folks seem to be adept at using bundlers
Open Questions:
* What is the best way to contribute?
* Would it be helpful to facilitate WG meetings?
**Darshan**: It's great that you've compiled this state of the world. I don't think _most_ users of Node would use this themselves; rather, it's a thing companies would benefit from. I reached out to some of them, and what I've gathered matches what you present here.
* Injecting CLI arguments - would probably require experimental flag
* ESM and TypeScript support
* Programmatic API
Darshan: A programmatic API would be useful, but postject currently builds on top of a dependency that is big.
PKG does support (that?) already, though it was deprecated after SEA came out. The maintainers created a fork that they maintain; it's blocked on the latest versions of Node by one issue. They support sourceless SEAs (not currently possible with core SEA), and it creates binaries for OSs other than your current one. It does this using code-cache. They patch Node and V8 with their own not-upstreamed code to achieve this. Unstable -> blocked: this would require upstreaming to both to stabilise. They couldn't invest much time.
Next question: how. This had full-time investment from Vercel; it's not reasonable to think it could be done without full-time investment. Reached out to companies that used it to try and fix this gap. Also, I (Darshan) could be contracted to do this work.
Alexander: What about calls from Node.js to other languages, like calling `git` or `ffmpeg` - has this been discussed?
Luke: Bundling, or calling out to it, or?
Alexander: Are we just reinventing containerisation?
Luke: I've mostly thought about the CLI example that calls out to things
Alexander: In npm these dependencies are just magically assumed. Make sure the headers are there for the node-gyp build, or that `git` is in `$PATH`. OCI maybe provides a solution here but they are not double-click runnable
Darshan: don't think this desire is covered by an Issue at present
Joyee: Yesterday Marco and I were discussing the SEA build flow. Wondering why it has to be done in a separate command instead of being part of the Node executable. postject(?) appears to use a dep to do some binary surgery; the command that gets distributed via npm is WASM. Afaict it's just built in and we don't go through the wasm layer. Why separate?
Darshan: the leaf dependency is huge (and the wasm is therefore huge). I'd consider this too big to bundle. Could make it smaller by splitting things out.
Joyee: How big? Shouldn't be that big [citation needed]
Darshan: No numbers, idk it's in postject repo
Joyee: Linker should be able to trim unused stuff. Is it FUD that it is so big post-compile?
Darshan: Could pass linker flags to cmake; we tried doing a bit of that in the past, but no substantive improvements. Could be something that we try to invest in. I think a better investment would be something in pure JS for ease of use.
Joyee: Sounds possible. It's just binary surgery
<!-- Thomas: sounds like a joke on "it's just brain surgery" -->
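*Note: for reference, the documented SEA build flow under discussion looks roughly like this. A minimal sketch driven from a script for illustration; it assumes the app is already bundled into a single CJS file (`bundle.js`, e.g. via esbuild), and the fuse value is the sentinel from the Node.js SEA docs.*

```ts
import { execSync } from 'node:child_process';
import { writeFileSync } from 'node:fs';

// 1. Describe what goes into the SEA preparation blob.
writeFileSync('sea-config.json', JSON.stringify({
  main: 'bundle.js',
  output: 'sea-prep.blob',
}));

// 2. Generate the blob.
execSync('node --experimental-sea-config sea-config.json', { stdio: 'inherit' });

// 3. Copy the node binary and inject the blob into it with postject.
execSync(`cp ${process.execPath} my-cli`, { stdio: 'inherit' });
execSync(
  'npx postject my-cli NODE_SEA_BLOB sea-prep.blob ' +
  '--sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2',
  { stdio: 'inherit' },
);
// macOS and Windows additionally need the binary's signature removed before
// injection and re-added afterwards.
```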
Luke: Sourceless, a feature of PKG, could you say more about what that means?
Darshan: In PKG you compile a JS application into an SEA. If you're an enterprise and you deploy code to users, you don't want them to look at the source. Right now that's easy to do: everything in the lib directory is inside the binary, and any kind of editor would reveal it. PKG creates a code-cache out of your code; this is what required the aforementioned patching. It creates the V8 code cache for direct use without source code access.
Luke: was aware of that feature for the use of performance, hadn't thought about it from the POV of ~obfuscation
Thomas: How obfuscated is code caching?
Darshan: Not straightforward, but reversible. Did see a blog post about it. It's complicated though
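*Note: a minimal sketch of the code-cache mechanism described above, using `node:vm` for illustration. The vm API still wants the source when consuming the cache (V8 validates it), which is part of why fully sourceless binaries need the patches Darshan mentions.*

```ts
import vm from 'node:vm';

const source = 'globalThis.answer = 6 * 7;';

// Compile once and capture the V8 code cache (bytecode) for the script.
const script = new vm.Script(source, { filename: 'app.js' });
const cache = script.createCachedData();

// Later: instantiate again, supplying the cache. cachedDataRejected reports
// whether V8 accepted the cache or silently recompiled from source.
const restored = new vm.Script(source, { filename: 'app.js', cachedData: cache });
restored.runInThisContext();
console.log(restored.cachedDataRejected, (globalThis as { answer?: number }).answer);
```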
Darshan: not sure if SEA is something that qualifies for a WG, they tend to be indefinite lifetime things? Don't have the rules to hand. Not currently a WG. We could definitely have weekly meetings, but so far favoured having ad-hoc meetings. Often folks will have no updates, so weekly might not be super productive.
Joyee: there were past discussion - difference between WG and not is the chartered-ness. In recent years we have skipped the formalities. Typically: have a repository, GitHub team, create a cross-org Label to collect meeting agenda, ask Michael Dawson to use automation to create a meeting Issue, and folks with Zoom access make it happen. This may not be documented.
Luke: "Team" sounds like the correct term. Not intending to schedule a weekly, but want to promote some synchronous comms.
Joyee: People typically have a Slack channel and a Label to track things. Up to the team but that's a starting point.
Darshan: Organically, most of the discussion has happened on the Issues. The Slack channel was spawned by an SEA member. All forums are quiet atm.
Stephen: Virtual FS also came up in the context of zip archives; just saying it has multiple use-cases. Maël Nison (of Yarn) opened a PR to add support for zip; think it has stalled. Will add a TSC Label to that issue to try and make progress there.
<!-- we hit time, 11:07 -->
## Node.js embed Undici
Facilitator: @mcollina
Scheduled time: 11:00 CET
https://github.com/openjs-foundation/summit/issues/445
**Matteo**: James opened an Issue to mark `node:http` as legacy, that's where it began. https://github.com/nodejs/node/issues/57277
Node's `fetch` uses Undici. `node:http` internals are complex. The only way it is extensible is by hacking stuff/monkey-patching its internals. It uses Socket/TLS connections as its core primitive. This causes problems with supporting HTTP/2. Hard to extend, ...
The solution is a different architecture: a client is a wrapper over a single socket, with multiple sessions schedulable on a single client; the concept of a pool; the concept of an agent, which is multi-host.
<!-- Matteo shares a diagram of the architecture -->
Dispatchers, interceptors, retryInterceptor, dnsCacheInterceptor, cacheInterceptor.
Should be straightforward to add HTTP/3 support to this system.
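*Note: a minimal sketch of the dispatcher/interceptor model described above, using the npm `undici` package. Interceptor names are as shipped in recent undici releases (availability varies by version), and the URL is illustrative.*

```ts
import { Agent, interceptors, request } from 'undici';

// An Agent (multi-host) composed with interceptors; each interceptor wraps the
// dispatch step, middleware-style.
const dispatcher = new Agent({ connections: 128 }).compose(
  interceptors.dns(),    // DNS caching
  interceptors.retry(),  // retry failed requests
  interceptors.cache(),  // HTTP response caching (added in undici v7)
);

const { statusCode, body } = await request('https://example.com', { dispatcher });
console.log(statusCode, (await body.text()).length);
```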
We use `llhttp` via WASM rather than a native addon. This is relevant in jitless mode: `fetch` is unavailable in jitless mode.
We think `http.request` is not good for production use cases. Whack-a-mole bug-fixing: fix one and more pop up.
Undici is a very active repository. Good download trajectory on npm-stat.com, ~3M downloads/day
People are replacing the global dispatcher with their own to change some configuration. Some of this is resolved in Node 24 with native `http[s]_proxy` support (?), thanks Joyee! There are still some concerns about exposing more of the undici API directly to JS.
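*Note: a sketch of what "replacing the global dispatcher" looks like in userland today, e.g. to send traffic through a proxy. `ProxyAgent`/`setGlobalDispatcher` are undici APIs and the proxy URL is illustrative; this configures the npm-installed undici, which is not automatically the same instance as the copy bundled into Node's built-in `fetch` (part of the juggling discussed below).*

```ts
import { ProxyAgent, setGlobalDispatcher, request } from 'undici';

// All requests made through this undici instance now go via the proxy.
setGlobalDispatcher(new ProxyAgent('http://proxy.internal:8080'));

const { statusCode } = await request('https://example.com');
console.log(statusCode);
// undici also ships EnvHttpProxyAgent, which reads HTTP_PROXY/HTTPS_PROXY/
// NO_PROXY, similar to the behaviour Node 24 wires up natively.
```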
Looking at a PR to improve the `fetch` docs.
undici.request has a friendlier API that also supports things like `Promise`. Benchmarks well.
*Discussion about how it's weird having to juggle installing undici alongside the internal implementation*
Matteo: most people using http in production are having to use an http-agent package at least
Tierney: I have dealt with more issues with http client packages on npm than I had with core
James: we have an HTTP/1 client API, an HTTP/2 client API, and some day I'd like to finish HTTP/3. The existing stuff that's there does not allow for me to add that without breaking everything. We need a revised API that can work with everything and it's not the one we have at present.
Matteo: any objections to moving it to core?
Paolo: people need to configure their agents. Why don't we expose the entire Undici API?
Matteo: today we bundle a subset of Undici; because we only needed `fetch`, we saved on bundle size. We could expose configuring the agent on Undici; that would be sufficient and would satisfy `fetch`. `fetch` has performance concerns due to spec requirements, Promise ticks, and so on.
Paolo: if we exposed entire Undici, people would use the client without `fetch` - how much would that grow the bundle
Matteo: don't expect much. We would need to do a few things first. Steps: before exposing in core, stabilise the dispatcher API; it needs some work. Added http-caching in v7, needs more cleanup. After that, it's possible to expose it. Some critical points: docs. Undici has horrible docs at present and work is needed; putting it in core would exacerbate that. Will we keep exposing it via deps, or lift the code, or make it internal?
Paolo: in terms of userland clash, could we let people use their bundled version as the backend of `fetch`(?) e.g. Undici 8 with Node 24, not compatible.
Matteo: right now the architecture is built in such a way that Undici 7 can be used for `fetch` in Node 18, 20, 22. Node 18 is based on Undici 5; 20 on 6.
Paolo: so they won't be incompatible
Matteo: it detects compat and decides what to use. We only support LTS versions, not an arbitrary support window.
James: In WinterTC we are looking into a proposal (agent support in fetch?)
Matteo: 2 concerns: (1) WinterTC has become a pay-to-play group, as part of Ecma.
James: The committee is much more open for discussion participation than described
Matteo: (2) I have a strong feeling that if we had to support WinterTC's fetch proposal we would have to start from scratch, unless it's totally opaque, which makes no sense. WhatWG streams would need writing in C++ with a direct net layer on top with no conversions. A lot of work. Who will pay? Maybe 1+ year(s) of work. Not opposed to that direction (vs. Undici) but unclear how it would be funded.
James: there is no concrete WinterTC proposal and nobody is talking about reimplementing anything. The discussion is around standardising the API, not reimplementing. Could the API not be extended/modified? The Undici Agent could influence that proposal in ways that ensure alignment. I just mention it because these discussions are starting.
Matteo: `fetch` - what does a custom agent do? Typically something that works at the TCP level, does something before, and controls low-level http; e.g. you need a custom agent for doing proxying. How does an http proxy work? You create a TLS socket to the proxy server, and inside that single socket, multiple requests. The http proxy then receives those requests and routes them. This is a lot of streams work, unless the agent is completely opaque (which is what Undici does). To be fast we have no streams at all, no Node streams; it uses a custom abstraction. Complex code. I don't expect this kind of system could be a wrapper around an external agent. If that proposal comes through it could be very hard for us to implement, though not impossible. If we expose the Undici API there will come a time when we need to make decisions.
James: some of this seems premature, we don't know the direction of the proposal. It literally just came up as an idea. If this is needed in core, know that Deno and Bun both need it, it warrants at least some coordination. I mention it only as a direction to work towards, not as a sudden shift.
Matteo: if we decided to expose Undici in core, we subject it to the LTS approach, meaning long term we have considerations
James: that's not new, we'll have AsyncLocalStorage "forever"
Matteo: we want to expose at least the agents, and the requests. We have a few APIs:
* pipeline
* stream
* request
* dispatch
Only request and dispatch make sense now
Paolo: remove those before exposing?
Matteo: indeed. Maybe stream has some helpful things. Undici stream was meant to be faster than request: you could return a stream right away for a body(?). It didn't turn out to be faster; we were able to make request as fast. Pipeline is just bad :grin: pipeline is compatible with Node streams.
Paolo/Matteo: jitless disables Undici. Is it common?
Matteo: no, I would not bother
Marco: TypeScript support has this problem too, as it's WASM-based.
Matteo: in core we could in theory use llhttp from code. The Undici WASM is faster than the C++ bridge btw.
Paolo: I'm still willing to finish Milo, the new parser that is more maintainable than llhttp. That C++ bridge would disappear if it becomes the default parser. So maybe don't lean into it.
Matteo: we would need a new API
Paolo: I was losing perf with Milo in WASM mode (is it "Milo"?)
Marco: Toolchain support is needed for the build. Amaro is compiled to WASM in a Docker image.
Matteo: Michael Dawson/IBM standardised these libraries. Undici already uses the Docker image for compilation. They take that image and generalise. Used for Amaro too.
Marco: everything is built in it. As we ship it via npm we need WASM.
Matteo/Paolo: we can further consider jitless
Matteo: do we want `fetch` to use HTTP/2 by default or not
Paolo: and should we mark http.request as legacy
Matteo: undici uses http2.request because it's low level enough. http.request is badly abstracted, both high and low level simultaneously
Marco: long term the goal should be 1 client. Maybe eventually we can unexpose http2.request
Matteo: it's not in an unmaintained state, but we don't add new features: feature-complete maintenance-mode. The HTTP/2 spec added a lot of complexity with a lot of improvement for CDNs, but less for servers themselves. Questionable improvements, poor stats, especially in lossy environments. Head-of-line blocking in TCP. HTTP/3 is UDP/QUIC and so forth, so it can avoid these problems in theory - some day for us :grin:
James: arguably trading one set of complexities for another. The Node process has to do so much more to support QUIC. On HTTP semantic layer they are identical. As long as we work towards the goal of having a single HTTP API, happy with whatever it takes to get there. It's not sustainable to have http.request do those other things.
Matteo: Q for the room, HTTP/2 by default y/n, semver major or minor?
James: what does on-by-default mean?
Matteo: if connecting to a TLS server, it will offer HTTP/2 via ALPN and upgrade the socket to HTTP/2 naturally instead of /1. Only for httpS connections, and only if the target server advertises it.
James: Have you implemented the alt-svc and DNS-discovery-based mechanisms?
Matteo: no, based on https
Paolo:
Matteo: when the TLS connection is created, it advertises the protocol
?
James: not a blocker, but we might want alt-svc support. There are ways for HTTP/1 to advertise HTTP/2 elsewhere.
Benji: most platforms (Java, Go) have the behaviour Matteo suggests. Not aware of complaints. I'm for the idea of doing it by default. Other platforms announce breaking changes.
Matteo: when it does a TLS connect, in the ALPN protocols we pass in HTTP/2. We send it out and the server decides what it wants to use. Depending on the ALPN protocol chosen for the socket, it switches.
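*Note: a sketch of today's opt-in for the behaviour being discussed: undici's `Agent`/`Client` accept `allowH2`, which adds h2 to the ALPN protocol list on the TLS handshake and upgrades the connection if the server selects it. The URL is illustrative.*

```ts
import { Agent, request } from 'undici';

const h2Agent = new Agent({ allowH2: true });

// Negotiates HTTP/2 via ALPN when the server advertises it, otherwise HTTP/1.1.
const { statusCode, headers } = await request('https://example.com', {
  dispatcher: h2Agent,
});
console.log(statusCode, headers);
```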
Alexander: as an end user, we want as much symmetry with browser as possible. I don't want to internalise special-casing for Node. Defaults following browser would be useful.
Paolo: in favour of enabling by default. Recommend global getter/setter to disable.
Matteo: this would be tied in with Undici globalness
Paolo: no, a global API setting enableHttp2 to false
Matteo: not so simple, need to pass allowH2 flag.
Paolo: can things change dynamically?
Matteo: lots of static properties. It would be a mess if you attempt to talk to an HTTP/1 server with HTTP/2.
Matteo: Undici got HTTP/2 stable in 7 which will be in Node 24. Anyone opposed to allowing HTTP/2 default to happen in a minor during 24.x's life? (major window has passed). Or we wait for Node 25
Marco: it could be too late for 24
What would be the difference?
Matteo: would be different in the production traffic we see in the servers
Thomas: there was a minor change to enable IPv6 by default and it was quite painful for us. Would be grateful if this is a major.
Matteo: 26 then
Alexander: can there be an opt-in period, making an announcement for now?
Thomas: you mean an experimental flag to enable it by default in 24?
Paolo: ??
Matteo: okay, makes sense. Enable by 25. People can disable with the flag if they want to.
James: H3 would be back on track once we have OpenSSL 3.5
Richard: FYI I was looking into NodeSource compilation issues with OpenSSL 3.5. That's of interest to us (Red Hat).
James: we (Cloudflare) can also dedicate some cycles to it
## Node.js integration with Chrome DevTools
Facilitator: @legendecas
Scheduled time: 13:30 CET
https://github.com/openjs-foundation/summit/issues/439
Chengzhong introduces the DevTools protocol and Node.js integration status.
The CDP supports inspection beyond JS features, e.g. web storage, caches.
Node.js currently has Worker/Network inspection. It's now tier 2.
The CDP frontend is available on most developer machines. Network inspection/interception does not require proxy or MITM certificate support.
Ongoing collaboration with DevTools team. Follows Google's workflow: design docs -> review -> submit CL.
Challenges: new domains (network, target), unsupported features (performance timeline)
Phase 1 - Network inspection: the M134 dedicated frontend has a network panel now. HTTP and fetch inspection. No websocket inspection.
<!-- Chengzhong demonstrates the network panel. -->
Phase 2: ResourceTiming, DiagnosticChannel Performance Timeline, EventSource (HTTP/2), WebTransport HTTP/3
Phase 3: Network Interception Domain Fetch
Worker integration:
Only VS Code supports NodeWorker; the plan is to support the CDP Target domain, which is stable and allows auto-discovery.
<!-- Chengzhong demonstrates --experimental-network-inspection -->
Call for collaboration: side-effect-free data inspection with readable streams
- subscribing to a readable's `data` event puts the stream into flowing mode
- writable data cannot be observed without patching
- Undici's websocket and fetch diagnostics channels do not have request identifiers, so you don't know where an event is coming from (see the `diagnostics_channel` sketch below)
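*Note: a minimal sketch of observing undici requests today via `diagnostics_channel` (channel names per undici's docs). The messages carry the request object itself but no stable request identifier, which is the correlation gap mentioned above.*

```ts
import diagnostics_channel from 'node:diagnostics_channel';

diagnostics_channel.subscribe('undici:request:create', (message) => {
  const { request } = message as {
    request: { method: string; origin: string; path: string };
  };
  console.log('request created:', request.method, request.origin + request.path);
});

diagnostics_channel.subscribe('undici:request:headers', (message) => {
  const { response } = message as { response: { statusCode: number } };
  console.log('response headers received:', response.statusCode);
});
```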
Call for collaboration: worker integration. Implement Target Domain, and maybe Target.exposeDevToolsProtocol
Matteo: don't think the phase 1 is correct?
Chengzhong: needs extra setup
Matteo: yes, it does work but is not great
Chengzhong: it's an improvement we can make
Matteo: yes.
Matteo: diagnostics channel to all the ?
Chengzhong: ..
Matteo: open an issue in undici and give a clear explanation with your feedback. The team would be happy to add them.
Matteo: on the data event: this is hard, not sure if it'd be doable. For fetch, getting the full body would be 100% doable. Chunks would be doable. For undici it's okay; for built-in http I have concerns.
Matteo: I have a list. You can check them.
Stephen: This talks about the client side. There were some discussions about server inspections. How much work would it need? How much work can we do instead of asking Google? ..tabs?
Danil: I do not see why not, but we need to be careful about how we proceed. Incoming is different from outgoing. It can confuse people. Should probably be in a separate view; on a protocol level it should be in a different domain. Different kinds of requests, different kinds of data, we don't want to mix them up.
Danil: need to look into specific .. for extensions. The Network panel is mostly tied to the web use case. Not fundamentally against it. Would be hard to take on specific Node.js use cases since we need to prioritize Chrome. If it benefits Chrome then yes. No need to hide UI but unlikely to build dedicated UI. But open to reviewing PRs.
Stephen: about distributing work. With QUIC, not sure what the plan is about implementing QUIC in the browser. Wondering how many changes it'd need to generalize sockets. Could we have more infra to see TCP sockets?
Alexander: being able to see child processes would be useful, like data being passed over IPC.
Danil: something we've recently added is very similar to TCP sockets. There's more interest in supporting socket debugging. Open to work together.
James: on QUIC: as it progresses, we'll have more primitives to tap into these things. We can add side-effect-free observation, and it can be a priority.
Watson: I use the inspector protocol API. How much interest does the Google team have in supporting that use case? Like when setting a breakpoint, having an option about allowing side-effects. Is it a no-go to change these things? Are they relevant for Google?
Simon: generally yes, if it makes sense. We'd need to see if it makes sense. Not sure if it's possible to support it in the debug evaluate. Feel free to write a design doc for us to review. Have tests in the CL to make sure it doesn't break. It's a feature-by-feature decision. Not opposed to adding new features, just not prioritized. Most developers don't use the debugger, or at least they use conditional breakpoints.
Watson: the more work we can do ourselves, the better?
Simon: yes
Danil: Cannot break existing use cases. Everything related to V8 is shared with V8, so it's different. CDP is just an implementation detail. We don't want to make people's lives harder. If it's very costly in the implementation then no.
Ruben: in the past few years it's been harder to collaborate with Google. What kind of communication is best?
Danil: for me, I prefer emails. We are also a different team now.
Stephen: is there interest in exposing it as a C++ API instead of having to go through serdes?
Danil: core debugger.. some potential to add extensions, not sure
Chengzhong: most of the inspection is implemented by Node.js; it doesn't need to be done by V8
Stephen: it'd be useful for APM to inspect more permanently
Danil: skeptical about the perf impact. Chrome does IPC with this serdes all the time and they are fine. Need more data.
Simon: Node.js does not consistently check UTF-8 or UTF-16 strings. I sent a PR but it didn't pass CI. That might already come with perf costs. Unless Chrome moves away from this page-inspection model with a different process in the browser, it's unlikely.
Stephen: (talks about overhead)
Danil: Unlikely V8 would expose an API that does not work for Chrome.
Chengzhong: attaching devtools in production comes with a lot of overhead.
Stephen: yes, but a lot of people are doing it
Benjamin: I do it.
Watson: not optimizing after the debugger is attached - isn't that fixed?
Simon: async stack traces would make it slow since it needs to store stack traces for the inspector. Breakpoints too, because they only work on Ignition bytecode. Not everything deopts; depends on what you use.
## Node.js Next 10 Survey
Facilitator: @marco-ippolito
Scheduled time: 14:30 CET
https://github.com/openjs-foundation/summit/issues/440
<!-- Marco presents slides -->
Doubling of survey respondents to 3000. We used SurveyMonkey. We were told that they had a lot of stress because we were doing many last-minute changes.
This year let's do it differently: go through the survey together during the summit, and ignore all the last-minute feedback.
Two weeks beginning NOW to give feedback, plus this 1-hour session.
Feedback channels: easyretro, PR on the repo
Deadline 16th April
General direction: no free response questions
Questions (see the REAL survey for verbatim questions):
* Demographic
1) Where do you currently live?
2) How long have you been working with Node.js?
3) What kind of organisation are you working in?
* Please help us get rid of "Other", they're painful to deal with
4) Which sector does your company work in, if applicable?
* Node.js Usage
5) Which of the following best reflects your role regarding Node.js? (multi choice)
6) How does your organisation invest in JS related projects?
7) What is your use case for Node.js? (multi choice)
8) What is your OS for local development? (multi choice)
9) What is your OS for production (multi choice)
10) What architecture is the machine you're running Node.js in production? (multi choice)
11) How do you get `node` executables? (multi choice)
12) What package manager(s) do you use? (multi choice)
13) Which Node Version Manager(s) do you use? (multi choice)
14) How do you manage the package manager for your project? (multi choice)
15) Which current Technical Priorities are important to you? (multi choice)
16) Are there Technical Priorities that you believe are missing? (:warning: open question)
17) What is important to you? (multi choice)
18) What is important to you that is not in the list? (:warning: open question)
* Technical Questions
19) Are you using the following experimental features? (multi choice)
20) Are you using the following new stable features? (multi choice)
21) Do you encounter any recurring issues when using Node.js that you would like to share with us? (:warning: open question)
* No character limit
<!-- feedback time -->
## Module customization hooks
Facilitator: @joyeecheung
Scheduled time: 15:45 CET
https://github.com/openjs-foundation/summit/issues/442
**Joyee**:
This session is about customisation of modules, specifically a consistent customisation experience for all kinds of modules. How we got here.
Before 2020 (ish, back-porting) there was `--experimental-loader` (deprecated!)
Transpiling ESM to CJS tends to have unexpected outcomes. Someone(?) made it work for require. For async to work it needs moving off-thread and being blocked on.
Effective for require, async, in-thread: pick 2.
Sometimes needed to mutate global context.
Possible to run into deadlock situations.
Added another API, `module.registerHooks()`, that picks the not-async trade-off option. Lots of things don't care about that: folks already had a synchronous constraint due to CJS, or a Babel worker.
Status: `module.registerHooks()` released in 23.5.0; backport to 22.x is in a PR
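*Note: a minimal sketch of the synchronous, in-thread hooks API (shape per the `module.registerHooks()` docs, Node >= 23.5.0). The hooks apply to both `require()` and `import`; the specifier redirect below is purely illustrative.*

```ts
import { registerHooks } from 'node:module';

registerHooks({
  resolve(specifier, context, nextResolve) {
    // Example: redirect a hypothetical bare specifier.
    if (specifier === 'old-pkg') return nextResolve('new-pkg', context);
    return nextResolve(specifier, context);
  },
  load(url, context, nextLoad) {
    const result = nextLoad(url, context);
    // result.source can be null for CJS loaded via require(),
    // one of the quirks listed below.
    return result;
  },
});
```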
What's next for `m.r()`?
Still some quirks: it reinvents require(). The hooks can get called twice, with an absolute path the second time instead of the original string. Source is null in the load hook for CJS require, so you have to do your own disk reading.
Proposal: re-implement `module.register()` as a helper using `module.registerHooks()`
Matteo: I have a module called "everysync" - it works on `SharedArrayBuffer`. I could send a PR and make progress there. It was built to prove this(?) was possible. ~100 LoC
Matteo: I think it should go in the worker thread module, not a generic utility
Yes, SharedArrayBuffers are growable.
Is it necessary to make things sync first? -> No
Matteo: If it's multithreaded how would it work - the main thread running the code will sync block, call this thing until it gets called. It's kind of a middleware pattern? Need to call the next which is a sync call on the other side(?)
Jacob: it's not constantly jumping back and forth, only at the end. In between it's staying within the worker
Matteo: what about nextResolve() function
Jacob: when you call them they don't flip-flop between threads
Matteo: imagine having multiple of them registered - when you block the main thread, (?) - when next is being called you will not be able to mix them at the same time?
Jacob: you can, but don't rely on them talking to each other, there be dragons
Matteo: generally seems right to do only one or the other
Jacob: No, let's say everything ported from the old register to registerHooks, and the newer one is leveraging Piscina: in that case it would be part of the registerHooks chain. The world pauses while it does its thing and when it finishes the world resumes. Nothing stays in the worker, it is temporary.
Matteo: right now the hooks are like a middleware with a chain. After all sync hooks run
Geoffrey: each set of hooks that registers would be a different thread, instead of there being one loader thread off-thread that all the hooks share. I don't know if it would be perfectly compatible. We're talking about this as a far-future/migration path. Are there higher-priority topics for the next 10 mins?
Moving on in the interest of time.
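*Note: a minimal sketch of the Atomics-based "block the main thread until the worker answers" pattern that the everysync-style approach above relies on. Illustrative only, not the everysync API; a real loader would also pass results back through the shared buffer.*

```ts
import { Worker } from 'node:worker_threads';

const shared = new SharedArrayBuffer(4);
const flag = new Int32Array(shared);

// The worker does async work, then flips the flag and wakes the main thread.
const worker = new Worker(
  `
  const { workerData } = require('node:worker_threads');
  const flag = new Int32Array(workerData.shared);
  (async () => {
    await new Promise((resolve) => setTimeout(resolve, 100)); // stand-in for async hook work
    Atomics.store(flag, 0, 1);
    Atomics.notify(flag, 0);
  })();
  `,
  { eval: true, workerData: { shared } },
);

// The main thread blocks synchronously here until notified: async work happens
// off-thread while the on-thread API stays synchronous.
Atomics.wait(flag, 0, 0);
console.log('worker finished, flag =', flag[0]);
await worker.terminate();
```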
Evaluate Hooks - a response to the existing hooks not being full enough to meet monkey-patching use cases in the wild. Mocking & RITM: modify results or skip evaluation.
The proposal works on CJS, WASM, addons, ...
Does not work on `SourceTextModule` (ESM) due to ECMAScript restrictions on exports mutability and state management. Could allow observability but not mutability.
Status: blocked on naming
`notSourceTextModuleEvaluate` would be an accurate, but unfortunate name.
We do not prefix things with `cjs` or `esm` normally
Arguably not just blocked on the name - consider that it not working on ESM is itself a blocker. If we mark CJS as legacy and don't support this it's mixed messaging for users and will result in bug reports. Guy is calling for V8 support to allow ESM inclusion.
Jacob: think the spec affords some wiggle room(?)
Joyee: mutability unlikely. We should just expose what we actually have the ability to do
Geoffrey: does this support the main things people want, observability?
Joyee: you can do trace events with CJS module evaluation. Reference to Vinicius
Geoffrey: Does proxying only need observability?
Bryan: observability for us in practice involves mutating, so this would not be useful to us. The next session covers this
Geoffrey: what if we do what Guy is saying and have the hook support the lowest common denominator, the set that works across all (CJS/ESM/...) - harming CJS and co but at least consistent.
Jacob: if the goal is to remove xyz then
Geoffrey: how will people migrate from CJS if they are locked in with the feature set
Bryan: as long as monkey-patching module compile is there, we will do that. We support older versions of Node. A new feature just for CJS is not particularly compelling for us, especially with the intent to move to ESM. Any other APM vendors?
Abhijeet (Sentry): +1
Joyee: should we _not_ land?
Ruben: We should consider an option that is better than what we have today, which is rife with monkey-patching.
<!-- lots of dialogue that I missed :'( -->
<!-- very overruning -->
## Enabling observability in ESM
Facilitator: @bengl
Scheduled time: 16:15 CET
https://github.com/openjs-foundation/summit/issues/444
**Bryan**: A convenient segue from the previous session
If anyone doesn't know me, I am both your host and also a Datadog employee, accessible on a bunch of socials.
We do "APM"
* Instrument code, capturing state
* Build spans/traces
* Ship it off somewhere (positive)
Via (1) intercepting code loading, and (2) monkey-patching
The latter is harder with ESM.
This is/was done with RITM (require-in-the-middle), pioneered by Thomas Watson (no longer in the room)
This slide has a sketch of how it works, basically replacing require with a version that wraps. But ESM is immutable!
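*Note: a minimal sketch of the require-in-the-middle idea, wrapping `Module.prototype.require` so every CJS load can have its exports intercepted and patched. Illustrative only; the real `require-in-the-middle` package also handles caching, core modules, and re-entrancy.*

```ts
import Module from 'node:module';

const proto = Module.prototype as unknown as { require: (id: string) => unknown };
const originalRequire = proto.require;

proto.require = function patchedRequire(this: unknown, id: string) {
  const exports = originalRequire.call(this, id);
  if (id === 'http') {
    // e.g. wrap the returned http.request here to start/finish spans
  }
  return exports;
};
```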
Gus Caplan interned at DD and did Module wrapping with a backdoor to add mutability
We mimic external mutability with internal mutability.
IITM - Import in the middle
Made 2021
This is *the* APM vendor solution, APMs all use it, OpenTelemetry uses it
What if the libraries we instrument also depend on internal mutability?
Live Bindings :cry: result in opportunities for breakage
Solution: transform source code. Easy to do in loader hooks; this can also be done in bundlers. We've been doing this since mid-2024. Modelled after what we did in Go (also DD-supported): "Orchestrion" (a pun?). It does build-time orchestration. That project was donated to OpenTelemetry. It's built on a YAML DSL to do this instrumentation, aspect-oriented-programming style.
Result: `orchestrion-js`, releasing early next week hopefully. It's a Rust project, for performance and the type system. Uses `swc`, as used elsewhere in Node.
Dynamically replace ESM code with post-build code at runtime, using `diagnostics_channel`/`tracingChannel`.
Also defined in YAML
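*Note: a minimal sketch of the `tracingChannel` pattern the injected code publishes to. The channel name and wrapped function are illustrative; the real channels and wrap points are whatever the Orchestrion YAML defines.*

```ts
import diagnostics_channel from 'node:diagnostics_channel';

const channel = diagnostics_channel.tracingChannel('orchestrion:example:query');

// APM side: subscribe to the lifecycle events and build spans from them.
channel.subscribe({
  start(msg) { /* open a span */ },
  end(msg) { /* sync part finished (promise returned) */ },
  asyncStart(msg) { /* promise settling */ },
  asyncEnd(msg) { /* close the span */ },
  error(msg) { /* record the error on the span */ },
});

// Instrumented side: the build-time transform wraps the original call.
async function instrumentedQuery(sql: string) {
  return channel.tracePromise(
    async () => `result of ${sql}`, // stand-in for the original implementation
    { sql },                        // shared context object passed to subscribers
  );
}

console.log(await instrumentedQuery('SELECT 1'));
```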
What's next?
* OSS `orchestrion-js`
* Share maintainership with other APMs
* Build generic customisation hook wrapper
* missed
* missed
Some concerns
* `tracingChannel` not good for `class` syntax constructors due to `super`
* Right now veering towards `enterWith`
* Having to include a parser is not great
* Could we expose SWC AST? And expose it in the loader chain?
* The notion of a stable AST is challenging
* Would involve blessing one forever?
* Other native loader hooks
Joyee: curious about constructor case
Benjamin: stable AST concerns
Deep dive into the `constructor`/`super` case
(Missed a lot of notes)
Geoffrey: prefer build time to run time. You can safely error, which you can't at runtime. Excited by this approach
(I have run out of steam to take notes)