Important considerations when asking "what is slow?"
Making something X% faster isn't always what matters; what matters is the end-user impact.
For example, dropping 1 minute off a 20-minute CI run won't change user behavior.
Instead, CI time falls into buckets like "instantaneous", "get a coffee", "work on something else", and "come back the next day".
Rebuilds from feature changes when switching workspace members:
Within a large workspace, as the user moves between crates, the enabled features might differ, causing rebuilds of dependencies.
Check that everything above the current crate still builds:
Our rebuild detection should mean that people only see a slowdown from things that actually depend on what was changed.
However, when changing implementation details, only relinking should be needed, not rebuilding.
Even then, re-linking every test binary can be significant, even if you consolidate integration test binaries.
Check that everything above the current crate passes tests:
Depending on your tests, this can dwarf the build times from above.
Being able to more precisely target what to test would help keep this down.
Precise caching:
Without careful attention to cached contents, network transfers and packing/unpacking can overwhelm the benefits of caching.
cargo
Locking of TARGET_DIR:
Cargo and rust-analyzer (r-a) can fight over access to the TARGET_DIR.
This can be worked around by setting a custom target dir for one of them or by switching r-a to a separate profile.
RUSTFLAGS:
r-a users reported that they set RUSTFLAGS=--cfg=test to get tests analyzed.
This causes cache thrashing between the two.
Besides the rebuild causes mentioned above (features, RUSTFLAGS), it can be difficult to identify why a rebuild happened in order to avoid it.
cargo fix
Running cargo fix --workspace to address some kind of lint can be slow because cargo fix serializes the build of each build target in each workspace member.
CARGO_COMPLETE
For the new completions, we'll need Cargo to be fast at querying manifests and lockfiles in different states.
Teams:
Purpose: unblock other work
Re-organize intermediate artifacts in the target dir from being organized by role to being organized by package+hash.
This will require passing every directory needed to rustc rather than it doing the lookup by hash.
Risk:
To clarify what we are re-organizing,
it is the build-dir from https://github.com/rust-lang/cargo/issues/14125.
Teams:
Hash the RUSTFLAGS so that each distinct set of flags gets its own intermediate artifact.
See also https://github.com/rust-lang/cargo/issues/8716, https://github.com/rust-lang/cargo/pull/14830
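A rough illustration of the idea (this is not Cargo's actual fingerprinting code):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive a per-RUSTFLAGS suffix for the artifact directory so builds
// with different flags stop evicting each other's artifacts.
fn artifact_dir_suffix(rustflags: &[String]) -> String {
    let mut hasher = DefaultHasher::new();
    rustflags.hash(&mut hasher);
    format!("{:016x}", hasher.finish())
}

fn main() {
    let flags = vec!["--cfg=test".to_string()];
    // e.g. target/debug/deps-<hash> instead of one shared deps dir
    println!("deps-{}", artifact_dir_suffix(&flags));
}
```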
Teams:
Depends on:
Take an exclusive lock on immutable artifact dirs only while they are being created; otherwise grab a shared lock.
See also https://github.com/rust-lang/cargo/issues/4282
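A minimal sketch of that protocol, assuming std's recently stabilized file-locking APIs (File::lock / File::lock_shared) and glossing over error recovery:

```rust
use std::fs::{self, File};
use std::io;
use std::path::Path;

// Take a shared lock on an immutable artifact dir, creating it under an
// exclusive lock first if it does not exist yet.
fn open_artifact_dir(dir: &Path) -> io::Result<File> {
    let lock = File::create(dir.with_extension("lock"))?;
    if dir.exists() {
        // Already created and immutable: a shared lock suffices, so
        // concurrent cargo / rust-analyzer invocations can all read it.
        lock.lock_shared()?;
        return Ok(lock);
    }
    lock.lock()?; // exclusive, only while creating
    if !dir.exists() {
        fs::create_dir_all(dir)?;
        // ... write the artifacts into `dir` here ...
    }
    // Downgrade to a shared lock; the gap between unlock and re-lock is
    // benign because the dir is immutable once created.
    lock.unlock()?;
    lock.lock_shared()?;
    Ok(lock)
}

fn main() -> io::Result<()> {
    // "target/imm/abc123" is an illustrative path.
    let _guard = open_artifact_dir(Path::new("target/imm/abc123"))?;
    Ok(())
}
```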
Teams:
Give users the ability to opt in to feature unification across their workspace so they can avoid rebuilding dependencies based on which package they select.
See also https://github.com/rust-lang/rfcs/pull/3692
cargo report for previous builds
Teams:
Cargo logs rebuild information and allows playing it back (maybe with timings as well?) so users can look into why a previous build rebuilt what it did.
See also https://github.com/rust-lang/cargo/issues/2904
Teams:
As a profile setting (which includes per-package values), skip code-gen until linking all of the rlibs together.
This will be especially important for cases like aws sdk, windows-sys, etc.
This could even remove the need for a lot of the use of cargo features.
We may want to consider packages being able to set a default profile setting for when they are built into another package, like aws sdk or windows-sys being able to enable mir-only rlibs. See https://github.com/rust-lang/cargo/issues/7940
Teams:
Depends on:
Risks
Teams:
Depends on:
Benefits from:
May conflict with:
We could share build artifacts between projects.
See also https://rust-lang.github.io/rust-project-goals/2024h2/user-wide-cache.html
Teams:
Depends on:
We could have out-of-process cache plugins to support reading and/or writing to remote cache stores.
Where CIs support it, this could also allow for precise caching between CI jobs.
e.g. a contributor who trusts their CI can enable support for reading the cache from CI. This would most help when dependencies get updated.
Teams:
Depends on:
Besides keeping disk space low, this can help CI have slightly more precise caching between CI jobs.
See also https://github.com/rust-lang/cargo/issues/12633
Teams:
May conflict with:
If a developer changes a test function or the body of a function, only that rlib needs to be rebuilt and the final result relinked, rather than rebuilding everything.
This is similar to C++ development, where .cpp file changes only cause that compilation unit to rebuild while .h changes cause all dependents to rebuild.
See also https://github.com/rust-lang/cargo/issues/14604
Teams:
May conflict with:
Commonly, people put tests in an inline mod tests {}.
If they change a test, the production lib and everything that depends on it get rebuilt.
If the user instead puts the tests in a tests.rs declared with an external #[cfg(test)], this goes away.
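A sketch of that two-file layout (the file split is the point, so both files are shown in one block):

```rust
// src/lib.rs
pub fn add(a: u32, b: u32) -> u32 {
    a + b
}

// cfg(test) is applied externally, so a non-test build never reads
// src/tests.rs, and (with sufficiently precise change tracking) edits
// to the tests would not dirty the production lib.
#[cfg(test)]
mod tests;

// src/tests.rs
use super::*;

#[test]
fn adds() {
    assert_eq!(add(2, 2), 4);
}
```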
We could shift the workflow to encourage this, e.g.
Teams:
Re-linking test binaries can be slow.
The default path in Cargo is to have a test binary per test source file.
See also https://matklad.github.io/2021/02/27/delete-cargo-integration-tests.html
Option 1: consolidate test binaries automatically, see also https://github.com/rust-lang/cargo/issues/13450
Option 2: Add an "auto" mod feature and document a single test binary with it (sketched below)
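For option 2, the consolidated layout might look like this (module names are illustrative):

```rust
// tests/main.rs: the single integration-test binary for the package, so
// the linker runs once rather than once per file under tests/.
// In practice each module would be `mod api;` pointing at tests/api.rs,
// etc.; they are inlined here to keep the sketch self-contained.
mod api {
    #[test]
    fn smoke() {
        assert_eq!(2 + 2, 4);
    }
}

mod cli {
    #[test]
    fn parses() {
        assert!("--help".starts_with("--"));
    }
}
```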
Risks: with both src/main.rs and src/lib.rs, how does each know what to pull in?
Teams:
Improved in 2024 by having fewer things to compile
Teams:
May conflict with:
By changing the default linker, we can reduce edit-test cycle times.
The first step is switching the default linker to something else.
Once we've done that, the cost of exploring what the "best" default linker is becomes significantly lower.
See also https://blog.rust-lang.org/2024/05/17/enabling-rust-lld-on-linux.html
Teams:
May conflict with:
By changing to a fast-to-compile backend for workspace members or tests, we can reduce the edit-test cycle times.
Risks
Teams:
Purpose: unblock other work (within the scope of performance)
See also https://github.com/rust-lang/cargo/issues/13040
Teams:
Depends on:
Automatically filter tests to only those that call into code that was changed since the last run.
Risks
cfg(version), cfg(accessible)
Teams:
By stabilizing these, we take away the need to do version detection via build.rs.
While this would require an MSRV bump, the MSRV-aware resolver can allow that without affecting previous users.
Combined with bumping the minor version field on an MSRV bump, a maintainer could even still offer support for older MSRVs.
Note: the Cargo team previously rejected supporting users doing nightly version detection in https://github.com/rust-lang/cargo/issues/11244
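As a sketch of what this enables (cfg(version) is nightly-only behind the cfg_version feature gate as of this writing; cfg(accessible) is not yet implemented, so only the former is shown):

```rust
#![feature(cfg_version)] // nightly-only

// Select an implementation based on the compiler version, replacing a
// build.rs probe that would emit a custom cfg.
#[cfg(version("1.70"))]
fn positive(opt: Option<i32>) -> bool {
    // Option::is_some_and was stabilized in 1.70.
    opt.is_some_and(|v| v > 0)
}

#[cfg(not(version("1.70")))]
fn positive(opt: Option<i32>) -> bool {
    matches!(opt, Some(v) if v > 0)
}

fn main() {
    assert!(positive(Some(1)));
}
```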
Related: build.rs
See also https://github.com/rust-lang/rfcs/pull/2523
cfg_value!("name")
Teams:
A purpose for build scripts is to read cfg values (as CARGO_CFG_*) and make them available at compile time.
By being able to read these directly, we can cut build scripts out of the loop, reducing the need for them.
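A hypothetical sketch of the difference (cfg_value! does not exist yet; the body shows the cfg!-matching or build.rs plumbing it would replace):

```rust
fn target_os() -> &'static str {
    // With the proposed macro this could be a single expression:
    //     cfg_value!("target_os")
    // Today it takes either a build.rs re-exporting CARGO_CFG_TARGET_OS
    // or cfg-by-cfg matching like this:
    if cfg!(target_os = "linux") {
        "linux"
    } else if cfg!(target_os = "windows") {
        "windows"
    } else if cfg!(target_os = "macos") {
        "macos"
    } else {
        "other"
    }
}

fn main() {
    // Example use case from below: embedding target info in --version,
    // --bugreport, or panic output.
    println!("built for {}", target_os());
}
```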
Example use cases: --version, --bugreport, panics, etc.
Potential cases to support:
CARGO_HOST_TARGET
Teams:
A purpose for build scripts is to read TARGET and make that available.
escargot uses it to get a target tuple to help users programmatically perform builds.
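For reference, the build.rs plumbing this would replace (BUILD_TARGET is an illustrative name):

```rust
// build.rs: forward Cargo's TARGET env var to the crate being built so
// it can later be read at compile time with env!("BUILD_TARGET").
fn main() {
    let target = std::env::var("TARGET").unwrap();
    println!("cargo::rustc-env=BUILD_TARGET={target}");
}
```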
Teams:
If we can systematize the build.rs of -sys crates, we can make them easier to get correct, make them consistent across the ecosystem, and reduce how much build.rs logic needs to be audited.
This may also make it easier to control for env variables accidentally causing rebuilds.
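For example, a typical hand-written probe that many -sys crates reimplement today with slight variations (using the pkg-config crate; zlib is just an illustration):

```rust
// build.rs of a hypothetical zlib-sys-style crate: probe the system for
// the library and emit rustc-link-lib / rustc-link-search directives.
// Each crate tends to add its own env-var overrides and fallbacks here,
// and each variant has to be audited separately.
fn main() {
    pkg_config::probe_library("zlib").expect("zlib not found via pkg-config");
}
```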
build.rs
Teams:
Depends on:
If we can delegate build.rs to system-deps, then that reduces the number of build targets that need to be built.
See also
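With system-deps today, the build script already shrinks to a single call, with the dependency description moved into Cargo.toml metadata (a sketch; see the system-deps docs for the metadata side):

```rust
// build.rs: probing is driven entirely by the
// [package.metadata.system-deps] table in Cargo.toml, leaving no
// per-crate logic to audit.
fn main() {
    system_deps::Config::new().probe().unwrap();
}
```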
Teams:
By moving as many macros as possible away from proc-macros towards declarative macros, the macros are enforced at compile time to be pure, making them easier to audit and reducing the number of dependencies needed.
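A small sketch of the kind of helper that is often reached for as a derive/proc-macro but works as a declarative macro (newtype! is an illustrative name):

```rust
// A declarative macro cannot do I/O and needs no syn/quote dependency,
// so it is pure by construction.
macro_rules! newtype {
    ($name:ident, $inner:ty) => {
        #[derive(Debug, Clone, PartialEq)]
        pub struct $name(pub $inner);

        impl From<$inner> for $name {
            fn from(v: $inner) -> Self {
                Self(v)
            }
        }
    };
}

newtype!(UserId, u64);

fn main() {
    let id: UserId = 42u64.into();
    println!("{id:?}");
}
```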
Related:
See also
Teams:
In some cases this can help; in others it can hurt.
For proc-macros to leverage this, we'd need more insight into when to force an expansion (maybe not an issue if we migrate to declarative macros).
cargo fix
Teams:
Instead of running build targets serially, cargo fix would shell out to cargo check to build everything, fixing what is reported through the JSON output.
To avoid the class of problems that cargo fix works around by running serially, we could cancel and rerun cargo check once we've fixed suggestions for a package.
This would likely make it easier to implement other features, like an interactive mode, as it would live in the top-level command rather than in a rustc driver.
See also https://github.com/rust-lang/cargo/issues/13214
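A minimal sketch of that flow, assuming serde_json for parsing the message stream (this is not the actual cargo fix implementation):

```rust
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // One cargo check over the whole workspace instead of a serialized
    // build of each build target.
    let mut child = Command::new("cargo")
        .args(["check", "--workspace", "--message-format=json"])
        .stdout(Stdio::piped())
        .spawn()?;
    let stdout = BufReader::new(child.stdout.take().unwrap());
    for line in stdout.lines() {
        let msg: serde_json::Value = serde_json::from_str(&line?)?;
        if msg["reason"] == "compiler-message" {
            // A real implementation would walk the spans for
            // machine-applicable suggested_replacement entries, apply
            // them, then cancel and rerun cargo check for that package.
            if let Some(rendered) = msg["message"]["rendered"].as_str() {
                print!("{rendered}");
            }
        }
    }
    child.wait()?;
    Ok(())
}
```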
Risks
Teams:
When Cargo verifies the lock file and then builds, it does a full resolve.
We could instead encode a "verify lockfile" operation and a "filter resolve" operation, removing the resolver overhead completely.
Teams:
Cargo parses every dependency's manifest.
TOML is optimized for humans and not for machines.
We could use an alternative format for immutable dependencies, whether caching it or publishing it in the .crate (which wouldn't be available for git dependencies).
Teams:
Cargo performs a lot of analysis on manifests when loading them.
We could defer some of the lint analysis to our regular lint analysis phase so we know whether it's worth running or not, reducing manifest load time.
Teams:
See also https://github.com/rust-lang/cargo/issues/14603
Merged in https://github.com/rust-lang/cargo/pull/14605
Teams:
See also https://github.com/rust-lang/cargo/issues/14395
Risks