# Design Proposal for `-Zbuild-analysis`
The Cargo build analysis system extends the existing `--timings` functionality to provide persistent storage and enhanced reporting capabilities. See <https://rust-lang.github.io/rust-project-goals/2025h2/cargo-build-analysis.html> for more information.
Quote from the goal:
> ### The next 6 months
>
> Cargo will provide an opt-in unstable configuration option to collect the following data:
>
> * The existing metrics collected by the `--timings` flag
> * Rebuild reasons and related information
> * CLI arguments for each build
>
> Each data record will be associated with a build identifier,
> such as `CARGO_RUN_ID`,
> to make it possible to link related data within the same Cargo invocation.
>
> Two new unstable commands in `cargo report` will be introduced (command name TBD):
>
> * `cargo report rebuild-reasons`:
> Show which crates were rebuilt for a specific Cargo run and why.
> * `cargo report timing`:
> Display timing data for a specific build, including per-crate compile times.
>
> During the prototyping phase,
> the data may be stored as JSON blobs in a simple database format, such as SQLite.
> This approach allows schema evolution without committing to a stable format.
## Component overview
```mermaid
graph TB
A[cargo build]
A --> D[BuildLogger]
H[Cargo Configuration] --> D
subgraph "Log Messages"
M1[build-started]
M2[unit-timing-info]
M3[rebuild-reason]
end
D --> M1
D --> M2
D --> M3
M1 --> JSONL
M2 --> JSONL
M3 --> JSONL
JSONL --> F[cargo report commands]
```
## Design
### Configuration
Add an unstable config table `build.analysis` to configure the relevant metric collection settings. For example:
```toml
[build.analysis]
enabled = true
auto-clean-frequency = "…"
```
Only `build.analysis.enabled` is in scope. When enabled, Cargo automatically collects all of the metrics. We could break this down into different types of metrics, but that is out of scope.
Note that the config should not block any normal use of Cargo.
### Data sources
#### Timing data
Cargo already has timings reports: the HTML format is stable, but the JSON format is not. We'd like to reuse the [existing JSON timing infrastructure](https://github.com/rust-lang/cargo/blob/02bcdd29836032dca3a3bb7c430cb3ba2e6aac36/src/cargo/util/machine_message.rs#L90-L100) and feed that data into the local storage. We should also consider integrating [`-Zsection-timings`](https://github.com/rust-lang/cargo/issues/15817), the native timing info from rustc.
#### Rebuild reasons
Cargo nowadays writes fingerprint files (rebuild detection support files) to `target/debug/.fingerprint` for each `Unit` of work. The layout looks like this:
```
.fingerprint/
# Each package is in a separate directory.
# Note that different target kinds have different filename prefixes.
$pkgname-$META/
# Set of source filenames for this package.
dep-lib-$targetname
# Timestamp when this package was last built.
invoked.timestamp
# The fingerprint hash.
lib-$targetname
# Detailed information used for logging the reason why
# something is being recompiled.
lib-$targetname.json
# The console output from the compiler. This is cached
# so that warnings can be redisplayed for "fresh" units.
output-lib-$targetname
```
However, the serialized information is far sparser than what is actually tracked in [fingerprints](https://doc.rust-lang.org/nightly/nightly-rustc/cargo/core/compiler/fingerprint/index.html). The `lib-$targetname.json` file contains hashes that are not really meaningful to users. We'll need to serialize and attach details from [`DirtyReason`](https://github.com/rust-lang/cargo/blob/02bcdd29836032dca3a3bb7c430cb3ba2e6aac36/src/cargo/core/compiler/fingerprint/dirty_reason.rs?plain=1#L11-L84) to provide richer detail.
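The hashes in `lib-$targetname.json` alone cannot tell a user *why* a unit is dirty, so the `DirtyReason` details would have to be flattened into something user-facing. Below is an illustrative sketch (assuming `serde`/`serde_json` as dependencies; the `RebuildReasonDetail` type, its variant set, and its field names are simplified stand-ins rather than the actual `DirtyReason` definition) of how such reasons could be serialized for the log:
```rust
use serde::Serialize;

/// Illustrative only: a flattened, user-facing view of a dirty reason.
/// The real `DirtyReason` enum in Cargo has many more variants; this sketch
/// only shows how a few of them might be serialized for the log.
#[derive(Serialize)]
#[serde(tag = "kind", rename_all = "kebab-case")]
enum RebuildReasonDetail {
    /// The compiler changed since the last build.
    RustcChanged,
    /// The set of enabled features changed.
    FeaturesChanged { old: String, new: String },
    /// An environment variable the build depends on changed.
    EnvVarChanged {
        name: String,
        old_value: Option<String>,
        new_value: Option<String>,
    },
    /// Fallback when no precise cause could be determined.
    NothingObvious,
}

fn main() {
    let detail = RebuildReasonDetail::FeaturesChanged {
        old: "default".into(),
        new: "default, serde".into(),
    };
    // Prints: {"kind":"features-changed","old":"default","new":"default, serde"}
    println!("{}", serde_json::to_string(&detail).unwrap());
}
```
A tagged representation like this keeps each message self-describing while remaining easy to group by `kind` in `cargo report`.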
### Storage Format
One of the goals is to persist build metrics automatically, so developers can analyze previous builds. We use the JSONL (JSON Lines) format for storage, where each line is a self-contained JSON object. This format is:
* Simple to implement and debug
* Friendly to streaming writes, with no complex transaction handling
* Easy to parse and query with standard tools (jq, etc.)
* Schema-evolution friendly during the unstable phase
* An industry standard for metrics and log-like files
#### Storage path
The storage path will be kept as an implementation detail. However, it needs to live in a global location because:
* We'd like to persist data across `cargo clean`.
* We'd like to retrieve data for the same workspace even when users set `target-dir`/`build-dir`.
Log files are stored at `$CARGO_HOME/log/{run_id}.jsonl`, where `{run_id}` is a unique identifier for each build invocation in the format `{timestamp}-{hash}`. The timestamp is an RFC 3339 timestamp with the hyphens, colons, and dots removed for filesystem safety (e.g., `20251024T194502773638Z`), and the hash is derived from the canonical workspace root path.
Each build invocation creates a new JSONL file, allowing multiple builds to be tracked independently.
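For illustration only, a `{timestamp}-{hash}` run identifier could be derived roughly as follows; the helper name `run_id`, the choice of hasher, and the timestamp handling are assumptions, since the actual path layout is an implementation detail:
```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::path::Path;

/// Illustrative sketch: build a run identifier in the `{timestamp}-{hash}`
/// shape described above. The real implementation may use a different hasher
/// and timestamp source.
fn run_id(workspace_root: &Path, timestamp: &str) -> String {
    // Hash the canonicalized workspace root so the same workspace maps to the
    // same hash regardless of how the path was spelled on the command line.
    let canonical = workspace_root
        .canonicalize()
        .unwrap_or_else(|_| workspace_root.to_path_buf());
    let mut hasher = DefaultHasher::new();
    canonical.hash(&mut hasher);
    // `timestamp` is assumed to already be RFC 3339 with the separators
    // stripped, e.g. "20251024T194502773638Z".
    format!("{timestamp}-{:016x}", hasher.finish())
}

fn main() {
    let id = run_id(Path::new("."), "20251024T194502773638Z");
    println!("{id}"); // e.g. 20251024T194502773638Z-f891d525d52ecab3
}
```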
#### Message Format
Each line in the JSONL file is a JSON object with this structure:
```json
{
"run_id": "20251024T194502773638Z-f891d525d52ecab3",
"reason": "build-started",
"timestamp": "2023-12-07T14:30:45.123456789Z",
// ... additional fields specific to the message type
}
```
**Core fields present in every message:**
- `run_id`: Unique identifier for the build session (format: `{timestamp}-{hash}`)
- `reason`: Message type identifier
- `timestamp`: RFC 3339 timestamp when the event was logged
All messages for a single build invocation share the same `run_id`, making it easy to correlate events.
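As a reading-side sketch (assuming `serde`/`serde_json`; the `LogMessage` type is hypothetical), the core fields can be deserialized into typed fields while the message-specific fields are kept as loose JSON, which keeps readers tolerant of schema evolution:
```rust
use serde::Deserialize;

/// Illustrative sketch: the shared envelope fields are parsed into typed
/// fields, while message-specific fields are captured as untyped JSON so
/// unknown or newer fields don't break older readers.
#[derive(Deserialize)]
struct LogMessage {
    run_id: String,
    reason: String,
    timestamp: String,
    #[serde(flatten)]
    rest: serde_json::Map<String, serde_json::Value>,
}

fn main() {
    let line = r#"{
        "run_id": "20251024T194502773638Z-f891d525d52ecab3",
        "reason": "build-started",
        "timestamp": "2025-10-24T19:45:02.773638Z",
        "profile": "dev"
    }"#;
    let msg: LogMessage = serde_json::from_str(line).unwrap();
    println!(
        "run {}: {} at {} (+{} message-specific fields)",
        msg.run_id,
        msg.reason,
        msg.timestamp,
        msg.rest.len()
    );
}
```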
#### When to write data
The implementation uses a background thread with a channel-based queue:
* The main thread sends log messages through a channel as events occur
* A dedicated background thread writes messages to the JSONL file to avoid blocking the main thread
* The file is flushed when the logger is dropped (at the end of the build)
* Serialization happens in the main thread to avoid allocations
Each build creates a separate JSONL file, so there's no lock contention between concurrent builds, though we still put a flock on each file just in case.
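A minimal sketch of this channel-based approach is shown below; the `BuildLogger` name and API are illustrative rather than the actual Cargo internals, and the advisory file lock and real error handling are omitted:
```rust
use std::fs::File;
use std::io::{BufWriter, Write};
use std::path::Path;
use std::sync::mpsc::{self, Sender};
use std::thread::JoinHandle;

/// Illustrative sketch of a channel-based logger; the real implementation in
/// Cargo may differ in naming, locking, and error handling.
struct BuildLogger {
    // Pre-serialized JSON lines are sent over the channel so the background
    // thread only does I/O.
    tx: Option<Sender<String>>,
    handle: Option<JoinHandle<()>>,
}

impl BuildLogger {
    fn new(path: &Path) -> std::io::Result<Self> {
        let file = File::create(path)?;
        let (tx, rx) = mpsc::channel::<String>();
        let handle = std::thread::spawn(move || {
            let mut writer = BufWriter::new(file);
            // Drain the channel until all senders are dropped.
            for line in rx {
                let _ = writeln!(writer, "{line}");
            }
            let _ = writer.flush();
        });
        Ok(Self { tx: Some(tx), handle: Some(handle) })
    }

    /// Called from the main thread; serialization happens before this point.
    fn log(&self, json_line: String) {
        if let Some(tx) = &self.tx {
            let _ = tx.send(json_line);
        }
    }
}

impl Drop for BuildLogger {
    fn drop(&mut self) {
        // Close the channel so the writer thread finishes, then wait for it
        // to flush the file.
        drop(self.tx.take());
        if let Some(handle) = self.handle.take() {
            let _ = handle.join();
        }
    }
}

fn main() -> std::io::Result<()> {
    let logger = BuildLogger::new(Path::new("example-run.jsonl"))?;
    logger.log(r#"{"reason":"build-started"}"#.to_string());
    // Dropping the logger joins the writer thread, which flushes the file.
    drop(logger);
    Ok(())
}
```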
#### Message Design Approach
We use a minimal, summary-first approach that covers the most important post‑build queries while keeping overhead low. We could switch to event-driven logging if needed.
##### `build-started`
Start metadata emitted when compilation begins.
- `workspace_root`: Path to workspace root
- `target_dir`: Path to target directory
- `cwd`: Current working directory
- `profile`: Build profile (dev, release, etc.)
- `rustc_version`: Rust compiler version
- `rustc_version_verbose`: Verbose compiler version
- `host`: Host target triple
- `jobs`: Number of parallel jobs
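A minimal sketch (assuming `serde`; the struct name and field types are assumptions) of how the `build-started` payload could be modeled, with the shared `run_id`/`reason`/`timestamp` envelope added on top when the message is emitted:
```rust
use serde::Serialize;

/// Illustrative `build-started` payload; field names follow the list above,
/// and the schema may evolve during the unstable phase.
#[derive(Serialize)]
struct BuildStarted {
    workspace_root: String,
    target_dir: String,
    cwd: String,
    profile: String,
    rustc_version: String,
    rustc_version_verbose: String,
    host: String,
    jobs: u32,
}

fn main() {
    let msg = BuildStarted {
        workspace_root: "/path/to/workspace".into(),
        target_dir: "/path/to/workspace/target".into(),
        cwd: "/path/to/workspace".into(),
        profile: "dev".into(),
        rustc_version: "<output of `rustc -V`>".into(),
        rustc_version_verbose: "<output of `rustc -vV`>".into(),
        host: "x86_64-unknown-linux-gnu".into(),
        jobs: 8,
    };
    println!("{}", serde_json::to_string_pretty(&msg).unwrap());
}
```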
##### `unit-timing`
Per-unit timing summary emitted when a unit completes. The existing JSON timing schema can actually be reused: https://github.com/rust-lang/cargo/blob/4f15cc8882fb34bf70c945e9c8ae91d6c8c91757/src/cargo/util/machine_message.rs#L98-L107.
##### `rebuild-reason`
Why a unit was rebuilt, emitted as a separate message for granular analysis.
- `unit_id`: Unit that was rebuilt
- `package_id`: Package identifier
- `reason`: Detailed rebuild reason
##### `build-finished`
Final build summary emitted at completion.
- `success`: Boolean indicating build success
- `duration_secs`: Total build duration
- `total_units`: Number of units processed
- `fresh_units`: Number of cached units
- `dirty_units`: Number of rebuilt units
##### `dependency-resolved`
Aggregated dependency resolution timing and counts.
- `duration_secs`: Time spent resolving dependencies
- `package_count`: Number of packages resolved
### CLI commands
The command-line interface shown here is experimental. We might want to try something more expressive like [Buck query](https://buck.build/command/query.html) or [bazel query](https://bazel.build/query/language) in the future. See also the prior discussion:
* [#t-cargo > Query Interface for Cargo](https://rust-lang.zulipchat.com/#narrow/channel/246057-t-cargo/topic/Query.20Interface.20for.20Cargo/with/444689451)
* [#t-cargo > npm has a new query syntax](https://rust-lang.zulipchat.com/#narrow/channel/246057-t-cargo/topic/npm.20has.20a.20new.20query.20syntax/with/292402417)
The commands will parse JSONL files from `$CARGO_HOME/log/` to extract and display the requested information.
#### `cargo report timing`
```
Reports the Cargo build timing of previous builds
Usage: cargo report timing [OPTIONS]
Options:
    --id <id>        Identifier of the report
    --since <date>   Show most recent reports after a specific date
    --until <date>   Show reports older than a specific date
    --format <fmt>   Report in JSON or HTML format
```
The command will list available run IDs if multiple builds are within the time range. It parses JSONL files to extract timing information.
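A rough sketch of how this might read a run's log file is shown below; the function name, the literal paths, and the string pre-filter are illustrative, and a real implementation would deserialize each line and honor `--since`/`--until`:
```rust
use std::fs;
use std::io::{BufRead, BufReader};
use std::path::Path;

/// Illustrative sketch: pull out the timing messages for a given run from its
/// JSONL file under the log directory.
fn collect_timing_lines(log_dir: &Path, run_id: &str) -> std::io::Result<Vec<String>> {
    let path = log_dir.join(format!("{run_id}.jsonl"));
    let reader = BufReader::new(fs::File::open(path)?);
    let mut timings = Vec::new();
    for line in reader.lines() {
        let line = line?;
        // Cheap pre-filter before full JSON parsing; the exact reason string
        // is still unstable, "unit-timing" follows the draft above.
        if line.contains(r#""reason":"unit-timing""#) {
            timings.push(line);
        }
    }
    Ok(timings)
}

fn main() -> std::io::Result<()> {
    let lines = collect_timing_lines(
        Path::new("/home/user/.cargo/log"), // stand-in for $CARGO_HOME/log
        "20251024T194502773638Z-f891d525d52ecab3",
    )?;
    println!("{} unit-timing records", lines.len());
    Ok(())
}
```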
#### `cargo report rebuild-reasons`
The command would roughly look like:
```
Reports why a package was rebuilt
Usage: cargo report rebuild-reasons [OPTIONS]
Options:
    --id <id>          Identifier of the report
    --package <spec>   Package ID spec of the package to query
    --since <date>     Show most recent reports after a specific date
    --until <date>     Show reports older than a specific date
```
The command will list available run IDs if multiple builds are within the time range. It parses JSONL files to extract rebuild reason information.
## Performance considerations
* The JSONL format has minimal overhead. Writing JSON objects line-by-line should be fast
* Each build creates a separate file, avoiding lock contention
* Aggregation happens only when reading log files via `cargo report`. Index files can be created on demand to speed up queries.
## Unresolved questions
* Do metrics from nested `cargo` calls belong to the top-level Cargo invocation or to the nested invocations themselves?
* JSONL schema evolution
* How should we version the JSONL schema during the unstable phase?
* Should we add a version field to each message?
* `cargo report X` commands
* Should `cargo report` recognize all JSONL schema versions?
* Should `cargo report X` be built-in or external?
* How should we efficiently query large JSONL files?
* File management
* Should we implement automatic cleanup of old log files?
* What should be the default retention policy (by age, count, or size)?
* The logger is hand-written for the MVP. We should consider battle-tested alternatives like [tracing_appender](https://crates.io/crates/tracing_appender) or [metrique](https://crates.io/crates/metrique).
* Event-driven vs. summary-oriented logging considerations:
  * Event-driven (start/finish or phase-specific events)
    * 👍🏾 real-time visibility, e.g. for streaming analytics
    * 👍🏾 replayability (to some extent)
    * 👎🏾 post-processing complexity
    * 👎🏾 higher message volume
  * Summary-oriented (emit finalized summaries at the end of a phase or the build)
    * 👍🏾 simplicity
    * 👍🏾 lower message volume
    * 👍🏾 matches the existing `--timings` behavior
    * 👎🏾 loses real-time visibility
## Appendix
References from other build systems:
* `buck2 log`: https://buck2.build/docs/users/commands/log/
* pantsbuild stats: https://www.pantsbuild.org/stable/reference/subsystems/stats
* ninjalog: https://ninja-build.org/manual.html#ref_log