# Rust Testing
## Problems
- perf: Longer link times due to binary per test file
- See https://matklad.github.io/2021/02/27/delete-cargo-integration-tests.html
- perf: No test parallelism across binaries (and across the workspace), blocking until the last test in a binary is done
- No help in reusing expensive fixtures
- e.g. https://github.com/rust-lang/cargo/pull/11023/files#diff-9e3e31cc30eb728fcb75dcddf441dc127b0eb8cf764b4ce6cc260c3ec9487c26
- No easy way to re-run only the failed tests until they pass
- "fail-fast" by default
- With `--no-fail-fast`, the failure summary would need to come at the end so it isn't lost in the noise
- No easy identification of slow tests
- Both to keep the test suite fast
- And because slowdowns can be caused by bugs, as in https://github.com/rust-lang/cargo/pull/11062
- Runtime skipping of tests (e.g. does "git" exist)
- https://internals.rust-lang.org/t/pre-rfc-skippable-tests/14611
- Report reason test was skipped
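The workaround today is an early `return` from the test, which libtest counts as a pass with no record of the skip or its reason. A minimal std-only sketch of the pattern (the helper name is illustrative, not an existing API):

```rust
use std::process::Command;

/// Hypothetical helper: true if `tool --version` can be spawned,
/// i.e. the tool exists on PATH.
fn tool_available(tool: &str) -> bool {
    Command::new(tool).arg("--version").output().is_ok()
}

fn main() {
    // The usual workaround inside a #[test] today: early-return when
    // the tool is missing. libtest then reports the test as *passed*,
    // with no distinct "skipped" outcome and no recorded reason.
    if !tool_available("git") {
        eprintln!("skipping: git not found on PATH");
        return;
    }
    println!("git found; the real test body would run here");
}
```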
- Help with flaky tests
- Hard to find a specific test in the list because output order is based on completion order
- See https://github.com/nextest-rs/nextest/issues/47 for an example
- Output could be polished
- Can easily be noisy (lots of scrolling, especially to find failed test output)
- Hard to tie in custom runners (e.g. trybuild, trycmd, etc)
- Easy to let `Drop` mask errors, like for files
- e.g. a leaked file handle's close error is silently discarded on `Drop`, where an explicit `file.sync_all().unwrap()` would have surfaced it
- CI
- No test partitioning
- JUnit XML support
TODO: Hacky stdout capture
- Easy to do something wrong with it (anstream)
- Tests can't capture
- Not able to reuse for other things like pagers or anstream
TODO: more test metrics
- https://github.com/assert-rs/assert_cmd/issues/171
Pretty asserts
- https://github.com/rust-lang/rust/issues/44838
Rollup of results
- https://github.com/rust-lang/cargo/issues/8414
Warning when no tests run
- https://github.com/rust-lang/cargo/issues/11875
See also
- https://github.com/rust-lang/cargo/issues/1983
- https://github.com/rust-lang/cargo/issues/8430
## Inspirations
pytest
- Reusable fixtures
- Share expensive initialization
- Test case generators make it easy to plug in custom runners
- Fixtures and tests can skip at runtime
- Tests only run if their fixture passes, allowing for "smoke tests" to reduce the testing scope
- Test marks for easy running of test subsets
- [`--last-failed` and `--failed-first` support](https://docs.pytest.org/en/7.1.x/how-to/cache.html)
- Brief output, summarizing with a `.` per passed test (`F` for failed)
## Candidates
[cargo-nextest](https://nexte.st/index.html)
- Replaces `cargo test`, doing its own coordination of the tests within the test binaries
- Provides parallelism between binaries
- Improves output
- Offers CI features
- But makes it even harder to have reusable fixtures
- But still has link time issues
- But doesn't help with custom runners
Cargo setting to link all test files together
- Make it the default in a new edition
Add new cargo-test/libtest protocol for using jobserver to run all test binaries at once
- See https://github.com/rust-lang/cargo/issues/5609
- Could also help with https://github.com/rust-lang/cargo/issues/11875
- See also
- https://github.com/rust-lang/cargo/issues/1983
- https://github.com/rust-lang/cargo/issues/2832
- https://github.com/rust-lang/cargo/issues/4324
- https://github.com/rust-lang/cargo/issues/6151
- https://github.com/rust-lang/cargo/issues/6266
Replace libtest, modeled off of pytest
## "libpytest"
- Reusable fixtures, test generators, parameterized tests, etc
- fixtures and tests can report alternative status (e.g. skip with reason)
- Fixture cleanup can report failure
- tmpdir fixture would report large tmpdirs
- trybuild, trycmd can be test generators, tying into everything else
- doctest test generator that replaces `cargo test --doc`
- Can we compile everything into one binary?
- Report slow tests
- Brief output by default
- Annotation for process isolation
- Retry settings and annotations
In addition, provide a custom `cargo test` replacement, like criterion does for `cargo bench`
- Exposed so it can be pulled in as an xtask and be run with workspace-specific versions?
- Can we use the jobserver to allow `cargo pytest` to run test binaries in parallel without exceeding a max number of threads?
- Partitioning support
- JUnit XML support
Open questions
- Could we code-generate the doctests in a way to build them directly into the test executable?
- Offer the same service to trybuild
- How do we ensure they are updated before the test run
- "Self modifying code" approach would require failing and requiring them to re-run, even if the flag is set
- Could we have a `build.rs` for tests?
- Could we do this in `cargo pytest`?