Rust Testing
Problems
- perf: Longer link times due to one binary per test file
- perf: No test parallelism across binaries (and across the workspace), blocking until the last test in a binary is done
- No help in reusing expensive fixtures
- No easy way to re-run only the failed tests until they pass
- "fail-fast" by default
  - With `--no-fail-fast`, the failed summary would need to be at the end to not be lost in the noise
- No easy identification of slow tests
- Runtime skipping of tests (e.g. does "git" exist)
- Help with flaky tests
- Hard to find a specific test in the list because the order is based on completion
- Output could be polished
- Can easily be noisy (lots of scrolling, especially to find failed test output)
- Hard to tie in custom runners (e.g. trybuild, trycmd, etc)
- Easy to let `Drop` mask errors, like for files (see the sketch at the end of this section)
  - e.g. leaking a file handle, which would have been caught via an error from `file.close().unwrap()`
- CI
  - No test partitioning
  - No JUnit XML support
TODO: Hacky stdout capture
- Easy to do something wrong with it (anstream)
- Tests can't capture output themselves
- Not able to reuse for other things like pagers or anstream
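Concretely, the capture only intercepts the `print!`/`eprint!` macro family on the test's own thread; direct writes to the real handle and child processes bypass it, which is part of why it can't be reused by pagers or anstream. A small illustration, to the best of my understanding of today's libtest capture:

```rust
use std::io::Write;
use std::process::Command;

#[test]
fn capture_is_only_skin_deep() {
    // Captured: the print macros route through libtest's thread-local capture,
    // so this only shows up on failure or with --nocapture.
    println!("captured");

    // Not captured: writing to the real handle (as a pager or a crate that
    // grabs the raw stream might) goes straight to the terminal.
    writeln!(std::io::stdout(), "not captured").unwrap();

    // Not captured either: child processes inherit the real stdout.
    // (`echo` is just a convenient example; ignore failures if it's absent.)
    Command::new("echo").arg("also not captured").status().ok();
}
```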
TODO: more test metrics
- Pretty asserts
- Rollup of results
- Warning when no tests run
  - https://github.com/rust-lang/cargo/issues/1983
  - https://github.com/rust-lang/cargo/issues/8430
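The `Drop` issue above, concretely: `BufWriter`'s destructor flushes but discards any error, so a test can pass even though its writes failed. A minimal sketch (the path and contents are made up for illustration):

```rust
use std::fs::File;
use std::io::{BufWriter, Write};

#[test]
fn writes_config() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("example-config.toml");
    let mut writer = BufWriter::new(File::create(&path)?);
    writer.write_all(b"[package]\nname = \"demo\"\n")?;

    // Returning here would let `BufWriter`'s `Drop` flush and silently swallow
    // any I/O error. Flushing and syncing explicitly surfaces the error instead
    // (std's `File` has no `close()`, so `sync_all` is the closest substitute).
    writer.flush()?;
    writer.get_ref().sync_all()?;

    assert_eq!(std::fs::read_to_string(&path)?.lines().count(), 2);
    Ok(())
}
```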
Inspirations
pytest
- Reusable fixtures (see the sketch after this list)
  - Share expensive initialization
- Test case generators make it easy to plug in custom runners
- Fixtures and tests can skip at runtime
- Tests only run if their fixture passes, allowing for "smoke tests" to reduce the testing scope
- Test marks for easy running of test subsets
- `--last-failed` and `--failed-first` support
- Brief output, summarizing with a `.` per passing test (`F` for a failure)
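For comparison on the Rust side, the rstest crate already approximates some of this (fixtures injected by parameter name, `#[once]` for expensive shared setup, `#[case]` for generated cases). A rough sketch assuming a recent rstest; the fixture contents are made up:

```rust
use std::path::PathBuf;

use rstest::{fixture, rstest};

// A fixture is an ordinary function; `#[once]` computes it a single time and
// hands every test a `&'static` reference, which covers expensive setup.
#[fixture]
#[once]
fn data_dir() -> PathBuf {
    std::env::temp_dir().join("shared-test-data")
}

// Tests request fixtures by naming a parameter after them, and `#[case]`
// turns one function into several generated test cases.
#[rstest]
#[case("alpha")]
#[case("beta")]
fn name_is_ascii(data_dir: &PathBuf, #[case] name: &str) {
    assert!(name.is_ascii());
    assert!(!data_dir.as_os_str().is_empty());
}
```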
Candidates
cargo-nextest
- Replaces `cargo test`, doing its own coordination of the tests within the test binaries
- Provides parallelism between binaries
- Improves output
- Offers CI features
- But makes it even harder to have reusable fixtures
- But still has link time issues
- But doesn't help with custom runners
Cargo setting to link all test files together
- Make it the default in a new edition
Add a new cargo-test/libtest protocol that uses the jobserver to run all test binaries at once
Replace libtest, modeled off of pytest
"libpytest"
- Reusable fixtures, test generators, parameterized tests, etc.
- Fixtures and tests can report an alternative status (e.g. skip with a reason)
- Fixture cleanup can report failure
- A tmpdir fixture would report large tmpdirs
- trybuild, trycmd can be test generators, tying into everything else (see the sketch after this list)
- A doctest test generator that replaces `cargo test --doc`
- Can we compile everything into one binary?
- Report slow tests
- Brief output by default
- Annotation for process isolation
- Retry settings and annotations
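For context on custom runners: today trybuild and trycmd each hide all of their cases behind a single `#[test]`, so libtest can only list, filter, and report the wrapper rather than the individual cases a test-generator hook could expose. Roughly (glob paths are illustrative):

```rust
// One libtest test wrapping many trybuild compile-fail cases.
#[test]
fn compile_failures() {
    let t = trybuild::TestCases::new();
    t.compile_fail("tests/ui/*.rs");
}

// Same pattern for trycmd's CLI snapshot cases.
#[test]
fn cli_cases() {
    trycmd::TestCases::new().case("tests/cmd/*.toml");
}
```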
In addition, provide a custom `cargo test` equivalent, like criterion does for `cargo bench`
- Exposed so it can be pulled in as an xtask and be run with workspace-specific versions?
- Can we use the jobserver to allow `cargo pytest` to run test binaries in parallel without exceeding a max number of threads? (see the sketch after this list)
- Partitioning support
- JUnit XML support
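The jobserver idea above, sketched with the existing jobserver crate; this is illustrative only, and a real `cargo pytest` would presumably inherit cargo's jobserver (`jobserver::Client::from_env`) rather than create its own pool:

```rust
use std::path::PathBuf;
use std::process::Command;
use std::thread;

// Spawn one thread per test binary, but gate each spawn on a jobserver token
// so no more than `max_jobs` binaries run at once.
fn run_binaries(test_bins: Vec<PathBuf>, max_jobs: usize) -> std::io::Result<()> {
    let client = jobserver::Client::new(max_jobs)?;

    let handles: Vec<_> = test_bins
        .into_iter()
        .map(|bin| {
            let client = client.clone();
            thread::spawn(move || -> std::io::Result<()> {
                // Blocks until a token is free; the token is released when
                // `_token` is dropped at the end of this closure.
                let _token = client.acquire()?;
                let status = Command::new(&bin).status()?;
                if !status.success() {
                    eprintln!("FAILED: {}", bin.display());
                }
                Ok(())
            })
        })
        .collect();

    for handle in handles {
        handle.join().expect("runner thread panicked")?;
    }
    Ok(())
}
```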
Open questions
- Could we code-generate the doctests in a way to build them directly into the test executable?
  - Offer the same service to trybuild
  - How do we ensure they are updated before the test run?
    - A "self-modifying code" approach would require failing and asking the user to re-run, even if the flag is set
- Could we have a `build.rs` for tests? (sketched below)
  - Could we do this in `cargo pytest`?
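On the `build.rs` question: build scripts already run before a package's tests compile, so generated cases can be written to `OUT_DIR` and compiled straight into a test executable with `include!`. A minimal sketch of the mechanism; the generated body is a placeholder, not an actual doctest extractor:

```rust
// build.rs
use std::io::Write;

fn main() {
    let out_dir = std::env::var("OUT_DIR").unwrap();
    let dest = std::path::Path::new(&out_dir).join("generated_tests.rs");
    let mut file = std::fs::File::create(&dest).unwrap();

    // A real generator would scan doc comments (or trybuild/trycmd inputs)
    // and emit one #[test] per case; this emits a single placeholder.
    writeln!(file, "#[test]").unwrap();
    writeln!(file, "fn generated_case_0() {{ assert_eq!(2 + 2, 4); }}").unwrap();

    println!("cargo:rerun-if-changed=src/lib.rs");
}
```

```rust
// tests/generated.rs
include!(concat!(env!("OUT_DIR"), "/generated_tests.rs"));
```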