@sheplu gives an introduction to the session.
10:30-10:45 Coffee break
Issue: https://github.com/openjs-foundation/summit/issues/393
Facilitator: @ronag @mcollina
Matteo: We shipped fetch in Node.js, stable since 2023. Part of that decision was to split it from node:http/node:https, which is close to "unmaintainable", because any change could potentially break Express. At some point, fixing bugs would create more breakage, so it's just not worth it.
Matteo: Undici keeps its low-level and high-level APIs separated to avoid repeating the node:http situation.
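As context for that split, a minimal sketch (not from the session) of the two layers undici exposes today, the low-level request API next to the spec-compliant fetch:

```js
import { request, fetch } from 'undici'

// Low-level client API: plain objects and a Node stream, tuned for throughput.
const { statusCode, body } = await request('https://example.com')
console.log(statusCode, await body.text())

// High-level, WHATWG-compliant fetch(): Response/Headers semantics.
const res = await fetch('https://example.com')
console.log(res.status, res.headers.get('content-type'))
```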
Matteo: Undici has a more extensive test suite than node core
Matteo: What do we want to do?
Matteo: In order to configure fetch, I need to patch undici. There is a global dispatcher that is a symbol property of globalThis. People need to customize it in very convoluted ways.
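For illustration, this is the kind of global-dispatcher swap being referred to; the option values are just examples, not recommendations:

```js
import { Agent, setGlobalDispatcher, getGlobalDispatcher } from 'undici'

// Replace the process-wide dispatcher that undici looks up through a
// symbol property on globalThis; subsequent requests go through it.
setGlobalDispatcher(new Agent({
  keepAliveTimeout: 10_000, // example tuning values
  headersTimeout: 30_000
}))

console.log(getGlobalDispatcher()) // the Agent installed above
```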
Matteo: Anyone aware of the proxy problem?
Jacob: which one?
Matteo: Normally people use an env var. Previously the npm team implemented the HTTP_PROXY env variable so that npm picks it up with an agent.
Matteo: Undici is implementing support for the env var in undici itself.
Matteo: question: should we expose the proxy env var support to the users?
Matteo: there's no standard. Libraries implement it in their own ways.
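To make that concrete, a hedged sketch of the manual wiring applications end up doing today, since nothing reads the variable for them:

```js
import { ProxyAgent, setGlobalDispatcher } from 'undici'

// Each application decides which variables to honour and how.
const proxyUrl = process.env.HTTP_PROXY || process.env.http_proxy
if (proxyUrl) {
  // Route subsequent undici requests through the proxy.
  setGlobalDispatcher(new ProxyAgent(proxyUrl))
}
```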
Matteo: Showing slides for undici.
Matteo: we have a built-in mocker.
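The built-in mocker is undici's MockAgent; a minimal sketch of how it plugs into the same dispatcher mechanism (origin, path, and payload are made up):

```js
import { MockAgent, setGlobalDispatcher, fetch } from 'undici'

const mockAgent = new MockAgent()
mockAgent.disableNetConnect() // fail fast instead of hitting the real network
setGlobalDispatcher(mockAgent)

mockAgent
  .get('https://api.example.com')                 // origin to intercept
  .intercept({ path: '/users/1', method: 'GET' })
  .reply(200, { id: 1, name: 'Ada' })

const res = await fetch('https://api.example.com/users/1')
console.log(await res.json()) // { id: 1, name: 'Ada' }
```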
Matteo: Question: should we expose the setGlobalDispatcher function to users from Node.js?
Jacob:
Wes: Is Undici leaking the APIs straight into Node.js? Why don't we ship all the APIs?
Robert: I am a bit against it because then we can't change things. Semver is difficult to manage. Issues with composability. I have a side project called next-undici. Exposing Req/Res/Pipeline is okay; others less so. We should only expose the useful parts.
Matteo: seconding Robert. There's no clear cut answer.
Matteo: Users keep asking for ways to change settings; that's why we invented the global dispatcher.
Wes: Is the difficulty of changing things the only concern about exposing everything?
Robert: As far as I am concerned, yes. The ecosystem depends on how http works historically.
Paolo: Consider that in the future the Web Server Framework might also choose to implement a client, and that makes the situation more complex. Anyway, we could mark it as experimental, whatever experimental means nowadays. Finally, we could expose a public API that is considered stable (in a loose way), then have a semi-public API, and then a fully private API.
Marco: Can we expose node:undici behind a flag to expose the low-level stuff, and gradually move the http stuff to that module?
Matteo: Impossible; if you break http.request you break half of npm.
Matteo: we are thinking about deprecating http.request
Robert: Maybe remove the documentation.
Raz: If we deprecate http.request then the semver major of undici needs to be coupled with our semver major. Why not just expose it if we will already have that problem?
Robert: we are trying to minimize the breakage. Currently experimenting with hooks
Darcy: is it possible to make it compatible with other runtimes if we are trying to make it a long-term thing?
Matteo: Deno currently does not support connecting to a UNIX socket, so you need to just use undici on Deno. I'm impressed with the flexibility of their API.
Matteo: why do we need to expose a dispatcher API that sits beneath fetch
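As one concrete illustration of that layering, undici's fetch accepts a non-standard dispatcher init option, so fetch stays spec-shaped while the dispatcher underneath remains swappable per request (the agent options here are just examples):

```js
import { fetch, Agent } from 'undici'

// A dispatcher tuned for one slow upstream service.
const slowServiceAgent = new Agent({ headersTimeout: 60_000 })

// Only this call uses the custom dispatcher; other requests keep
// using the global one.
const res = await fetch('https://example.com/report', {
  dispatcher: slowServiceAgent
})
console.log(res.status)
```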
Dan: how does undici in Deno work on top of this?
Matteo: they implement the node API. It just works on top of that layer.
Qard: I could see this being valuable in other environments. The browser has ServiceWorker which does something similar. Could we make a more universal standard for this?
Robert: it would be difficult to get it to a point that supports that. At the moment we are focusing on fetch and trying to get it out soon. There's still the option of leaving it outside to make the semver independent from node
Darcy: That's a very good point, not all Node.js apps need an HTTP client.
Jacob: Feels like we should do something else, otherwise users will keep using the deprecated one.
Wes: we need to have some kind of layer for libraries to make things work in both the browser and on node.
Ruben: I don't think removing the docs is actually possible, but we can strongly recommend using the new API first.
Paolo: Maybe we can ship it as @nodejs/undici; this signals that it's something officially supported, but it's decoupled from our semver.
Wes: I think we are going to end up with the same problem no matter what and this may be confusing to the users. Maybe we need to bite the bullet and say that this is just what it takes for the user experience.
Robert: Maybe a compromise can be that we ship something like 95% in core and for the 5% power use case we direct people to the package.
Paolo: Bold sentence here. We can't always make our users happy. Sometimes we have to displease them for the greater good. Moreover, we can't lose our mental health just to keep them happy.
Ruben: How should we document this well?
Jacob: We should have a Learn article about this
Matteo: we have no doc on fetch, we just direct people to ~the standard~ MDN. We'll need help on the docs.
Ruben: The docs are like 10 years old and difficult for beginners. Our docs are not the first hit on Google.
Ruy: API docs can be intimidating to beginners
Ruy: That API to customize fetch calls using undici shown by Matteo earlier should really be in our docs today.
Marco: it's important to start documenting the differences between undici and the standard
Matteo: FinalizationRegistry is finally fixed in Node.js and we can use it now. Currently in the browser you can start a request but never consume it or clean it up. Undici tries to use FinalizationRegistry to clean it up when the request gets GC'ed.
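A rough sketch of that pattern; the names are illustrative, not undici's actual internals:

```js
// Free per-request resources if a response is garbage-collected without
// its body ever being consumed or cancelled.
const cleanupRegistry = new FinalizationRegistry((connection) => {
  // Runs some time after the registered response becomes unreachable.
  connection.destroy()
})

function trackResponse (response, connection) {
  cleanupRegistry.register(response, connection)
  return response
}
```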
Robert: Another solution would be to have a core API that lets user-land modules manage memory, and then Undici would not need to be in core.
Joyee: Has a session tomorrow on native memory management which could be relevant for more reliable cleanup upon garbage collection.
Paolo: In the browsers they probably do fetch-and-forget when unloading the page anyway. Most likely the browsers are in a better situation, as when the user changes the page they can just kill the entire JavaScript VM without properly cleaning up resources.
Matteo: Should we expose the proxy env var parsing in 22?
Ruben: who can decide?
Matteo: doesn't need to be stable, just a question about whether we should do it.
Ruy: do we want a rebranding of undici? People using that API 10 years from now might find the naming confusing.
…
12:15-13:45 Lunch break
node run vs. node --run debate

- node run: https://github.com/nodejs/node/pull/52190
- node run, but there are blockers: https://github.com/nodejs/node/pull/52267
- run/index.js or test/index.js in the case of a node test sub-command
- Prefer node --run because it's backportable, especially to 20, so there will be a shorter adoption cycle. In parallel we can deprecate node run on 22.
- node run doesn't do all the package manager does, and we should make it explicit that this is intentional. I am now leaning more towards the flag than the subcommand. People do run node test, which is $cwd/test.js. I think that's a pretty cool feature and adding subcommands would break that workflow. I think it's enough to justify a flag.
- --run also sets an expectation to users that this is a different implementation from npm run.
- node [--]run should be the right way to do it. We should figure out how to pave the way for people to do it right. I am thinking more about using it for development time.
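For reference, a minimal sketch of the flag form being debated (the script name is made up): node --run executes a script from the nearest package.json directly, without invoking a package manager and without npm extras such as pre/post scripts.

```sh
# package.json (hypothetical):
#   { "scripts": { "lint": "eslint ." } }

node --run lint   # runs the "lint" script without npm/pnpm/yarn
```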
- npm not being able to bundle itself: maybe one possible thing is to avoid doing a global install of npm. I think that may be pretty promising to help them speed it up.
- Installing npm globally is difficult because of permissions; this includes our own installer.
- The node --run implementation (ref): this group is a great place to collaborate on issues like this, and hopefully we can get back to having regular calls in the future.
- package.json interoperability.
- People use pnpm start to start their projects.
- pnpm shipped a feature that broke the usage of Corepack pinning (?)
- When npm fixes an issue today it takes 3-4 weeks for that fix to land in node. It would be nice to also fix that.
- Should helping npm ship faster fixes be part of the scope?
- Adding eslint to Corepack and having that be the way to distribute that package: the scope gets big when you don't define criteria for what gets added.
- … npm at the moment.
- nvm is the recommended way to manage node versions today.
- … nvm does not do that today.
- … nvm, and rather solve the problem of managing different node versions and package manager versions for a given user project.
- … npm in the past.
- Apart from v8 and npm, most other dependencies are hidden APIs from users that we could potentially replace at any time.
- Use nvm to manage the node binary version, corepack to manage the package manager version.
- Users could curl a different package manager alternative at install time.
- There's the devEngines proposal; maybe it's something that the node runtime itself could read from. Might be the best of both worlds.
- … the node runtime via version management, and allows the user to choose what package manager, if any, is installed alongside node.
- … node upon installation.