RFC: rules_nodejs fetch/install tarballs

Summary

Bazel rulesets must contend with the problem of wiring third-party dependencies from outside the repository into the build. Google itself vendors all dependencies, so this is a novel problem the Bazel community must solve.

Rulesets take a variety of approaches, with varying tradeoffs.

We can divide the problem into phases:

  1. Fetching
    • What tool does the downloads and caches them?
    • Are all declared artifacts downloaded, or only those needed for this build?
  2. Installing
    • Are all dependencies installed, or only those needed for this build?
    • Where are the dependencies installed to?
  3. Resolving
    • How do only the needed dependencies end up as action inputs?
    • How does the runtime know where the dependencies are installed?

Fetching dependencies

First, the user must specify what dependencies they want.
This should involve a lockfile that pins both the direct and transitive dependency versions for reproducibility.
It should also include an integrity hash for each artifact so that the downloader can verify what it fetched and serve repeat requests from a local cache instead of reaching out to the network.

Choice: Bazel downloads

Bazel has a full-featured downloader available to repository rules as repository_ctx.download().
It uses a local cache separate from the repository cache, so artifacts are not downloaded again even if the repository rule re-runs.
However, the downloader must be called from Starlark code.
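
For reference, downloading a single artifact from a repository rule looks roughly like the sketch below. The helper name and arguments are illustrative; the URL and SRI-format integrity string would come from a lockfile.

def _download_tarball(repository_ctx, url, integrity, output):
    # The integrity hash lets Bazel verify the file and serve it from its
    # cache on later fetches instead of hitting the network again.
    repository_ctx.download(
        url = url,
        output = output,
        integrity = integrity,
    )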

rules_go uses Bazel to download artifacts, but requires the user to transform their go.sum file into a deps.bzl file containing go_repository rules, each of which downloads one module into its own Bazel external repo. This allows each artifact to be referenced like @com_github_mgechev_revive//:revive.
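
For example, a deps.bzl entry looks roughly like the following; the sum and version values shown here are placeholders, not real values.

load("@bazel_gazelle//:deps.bzl", "go_repository")

def go_dependencies():
    go_repository(
        name = "com_github_mgechev_revive",
        importpath = "github.com/mgechev/revive",
        sum = "h1:placeholder-go-sum-hash",  # placeholder
        version = "v1.0.0",                  # placeholder
    )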

Choice: Package manager downloads

The alternative is to have the native tooling, such as a package manager, do the downloads. It will cache artifacts in its own global cache, which can introduce race conditions if Bazel calls it in parallel and the tool isn't designed to be used that way (this is what yarn's mutex flag works around).

Current state in rules_nodejs

npm_install and yarn_install repository rules always install all the dependencies listed into a single external workspace.

Actions can then depend on individual packages and their transitive dependencies. This means that the action graph sees O(1000) files for a package like react-scripts which has many dependencies, causing long action setup times for sandboxing and remote execution.

Proposal

Proof of concept is at https://github.com/alexeagle/rules_nodejs/tree/npm_install

Summary:

  • new repository rule npm_fetch_tarballs
    • given a package-lock.json, download all tarballs to a single external repository, add them to an npm cache, and mirror the dependency graph into BUILD files.
  • new rule npm_tarball
    • has no associated actions. Provides NpmTarballInfo which represents one tarball, its package name, and versioned dependencies.
  • new rule npm_install_tarballs
    • given a list of deps that provide NpmTarballInfo, run a single npm install command that runs purely offline and produces a TreeArtifact called node_modules. Also provides ExternalNpmPackageInfo. (Usage is sketched just after this list.)
  • modify existing rules to account for TreeArtifact
    • Bazel (and the RBE protocol) doesn't permit a label that points to a file inside a TreeArtifact, so nodejs_binary, npm_package_bin, and others (?) will need new string-typed attributes indicating the entry_point.
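
To make the workflow concrete, usage could look roughly like the following sketch. The attribute names, the @npm_tarballs repository name, and the generated package labels are assumptions for illustration rather than the final API (load statements omitted).

# WORKSPACE: fetch every tarball named in the lockfile into one external repo,
# populate an npm cache there, and generate npm_tarball targets that mirror
# the dependency graph.
npm_fetch_tarballs(
    name = "npm_tarballs",
    package_lock = "//:package-lock.json",
)

# BUILD: install only the packages this part of the build needs, producing a
# node_modules TreeArtifact.
npm_install_tarballs(
    name = "node_modules",
    deps = ["@npm_tarballs//react-scripts"],
)

nodejs_binary(
    name = "react_scripts",
    data = [":node_modules"],
    # entry_point as a plain string, per the last bullet above, because the
    # file lives inside the node_modules TreeArtifact and cannot be labeled.
    entry_point = "node_modules/react-scripts/bin/react-scripts.js",
)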

Open questions:

  • Can we avoid downloading all tarballs, and just download those needed for the build?
    • We could use http_file but it only accepts SHA-256 and package-lock.json doesn't give us that.
    • We could make the user translate their package-lock.json into a *.bzl file like rules_go does.
  • If we download all tarballs, but then only install the ones needed, does that give the performance boost we're looking for?
    • How long does download typically take?
    • How long does install typically take?

Details

Lockfile version support

Let's start by only allowing package-lock.json files with lockfileVersion=2, which is created by npm 7. We can later try adding support for other lockfile formats.

We need two implementations of lockfile reading. The first is in Starlark, which is pretty trivial - we just need to parse the "version", "resolved", and "integrity" fields. We can use the new json module in Bazel 4.0, so this feature requires that upgrade. The lockfile format allows this structure to be nested, but Starlark doesn't allow recursion; in practice we can just walk a fixed number of levels down the tree using nested loops.
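
A minimal sketch of that Starlark side, assuming the lockfileVersion=2 layout where each entry under "dependencies" carries "version", "resolved", and "integrity" and may nest further "dependencies" (the function name and depth limit are illustrative):

def _collect_packages(repository_ctx, lockfile_label):
    lock = json.decode(repository_ctx.read(repository_ctx.path(lockfile_label)))
    packages = []
    # Starlark forbids recursion, so walk a fixed number of nesting levels
    # with nested loops instead.
    frontier = [lock.get("dependencies", {})]
    for _ in range(10):
        next_frontier = []
        for deps in frontier:
            for name, info in deps.items():
                packages.append(struct(
                    name = name,
                    version = info.get("version"),
                    resolved = info.get("resolved"),
                    integrity = info.get("integrity"),
                ))
                if info.get("dependencies"):
                    next_frontier.append(info["dependencies"])
        frontier = next_frontier
    return packages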

The second implementation is in JavaScript, where we translate the dependency graph into BUILD files. Ideally we'd use the @npmcli/arborist library to read lockfiles, which would let us abstract away the file format. However, we would need to vendor that library and its dependencies to avoid the chicken-and-egg problem of needing to fetch dependencies in the implementation of our dependency fetcher. I tried using rollup/terser to make a self-contained JS file, but the result was still 1.3MB, which is larger than our whole "built-in" release bundle, so it seems too heavy.

Naming the tarballs

We get to choose a name for the files we put on disk. The "resolved" field contains something like https://registry.npmjs.org/@istanbuljs/schema/-/schema-0.1.2.tgz - we cannot just take the basename, since the package scope is needed for namespacing. We can do something like yarn does for its cache:

% ls /Users/alex.eagle/Library/Caches/Yarn/v6 | grep rollup

npm-@rollup-plugin-commonjs-14.0.0-4285f9ec2db686a31129e5a2b415c94aa1f836f0-integrity

Whatever we choose here needs to be written by the Starlark download code and then read by the JS code.
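
For illustration, a Starlark helper along these lines could derive the on-disk name; the exact scheme and the helper name are just one option:

def _tarball_filename(name, version):
    # "@istanbuljs/schema", "0.1.2" -> "npm-@istanbuljs-schema-0.1.2.tgz"
    # Folding the scope separator into the name keeps the result a single
    # path segment that both the Starlark and JS sides can reconstruct.
    return "npm-{}-{}.tgz".format(name.replace("/", "-"), version)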

Getting Bazel to understand the package-lock.json integrity hashes

I already made a fix upstream in Bazel to understand SHA-1, which was the only missing hash.

This is included in Bazel 4.1, so users must ensure they're on at least that version (the .bazelversion file should be used for this purpose).

Populating an npm cache

npm can run in an offline mode, which ensures that the npm_install_tarballs rule is hermetic and doesn't try to reach out to the network.

To make this work, we need to create a cache folder that matches npm's semantics. By setting the npm_config_cache environment variable to the path of the external repo, we can make the npm tool find the cache living there, which has good semantics since Bazel will clean that folder at the right time.
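
A minimal sketch of that wiring, assuming hypothetical attributes _npm (the npm binary), npm_cache (the cache files from the external repo), npm_cache_path (its path as a string), and srcs (the project's package.json and lockfile):

def _npm_install_tarballs_impl(ctx):
    node_modules = ctx.actions.declare_directory("node_modules")
    ctx.actions.run(
        executable = ctx.executable._npm,
        arguments = ["install", "--offline"],
        # The pre-populated cache and the project manifest must be inputs so
        # the install stays hermetic under sandboxing and remote execution.
        inputs = ctx.files.npm_cache + ctx.files.srcs,
        outputs = [node_modules],
        env = {
            # Point npm at the cache that lives in the external repository.
            "npm_config_cache": ctx.attr.npm_cache_path,
        },
    )
    return [DefaultInfo(files = depset([node_modules]))]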

One way to do this is by running npm cache add xx.tgz, with one subprocess for each tarball. However, using the rules_nodejs repo as an example, this takes many minutes to complete.
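
For concreteness, the per-tarball approach looks roughly like this inside a repository rule (the helper name is invented for the sketch; the slowness comes from spawning one npm process per tarball):

def _populate_cache(repository_ctx, tarballs):
    npm = repository_ctx.which("npm")
    cache_dir = str(repository_ctx.path("_npm_cache"))
    for tgz in tarballs:
        result = repository_ctx.execute(
            [npm, "cache", "add", tgz],
            environment = {"npm_config_cache": cache_dir},
        )
        if result.return_code != 0:
            fail("npm cache add failed for %s: %s" % (tgz, result.stderr))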

To speed this up, we probably want a batched mode that adds many tarballs to the cache at once. We could try to get such a thing upstream, but it would mean users need an even later version of npm.

Another option seems to be to peel back one layer: npm cache add just calls through to the pacote library here:
https://github.com/npm/cli/blob/6141de7b9e866979bf51706fd1be10c54235cf08/lib/cache.js#L97-L103

This introduces the possibility of version skew, though. We'd have to ensure that we vendor a version of pacote that creates a cache which works with the version of npm consuming it.

Bazel extracting the tarballs

An alternative is to have Bazel itself extract the tarballs (e.g. with repository_ctx.download_and_extract). Sadly this doesn't work, because the stripPrefix isn't constant across all npm packages. Most have a top-level directory "package", but some don't, like @types/rimraf@2.0.3: Prefix "package" was given, but not found in the archive. Here are possible prefixes for this archive: "rimraf".
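
For reference, the approach that fails is roughly the following, where info is one entry parsed from package-lock.json (the helper name is illustrative):

def _extract_with_bazel(repository_ctx, name, info):
    repository_ctx.download_and_extract(
        url = info["resolved"],
        output = "node_modules/" + name,
        integrity = info["integrity"],
        # Fails for tarballs whose top-level directory is not "package",
        # e.g. @types/rimraf@2.0.3 unpacks to "rimraf/".
        stripPrefix = "package",
    )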

Code sharing

We already have the internal/npm_install/generate_build_file.ts program, which has a lot in common with the BUILD file generation needed for this design. It also writes syntax-sugar files, such as index.bzl files under packages that have bin entries in their package.json. So we probably want to write new tests around that script and augment it to run in two modes: either "produce js_library" (the existing mode) or "produce npm_tarball" (the new mode).
