Rust Testing

Problems

TODO: Hacky stdout capture

  • Easy to do something wrong with it (e.g. in anstream)
  • Tests can't perform capture themselves
  • Not reusable for other purposes, like pagers or anstream

TODO: more test metrics

Pretty asserts

Rollup of results

Warning when no tests run

https://github.com/rust-lang/cargo/issues/1983
https://github.com/rust-lang/cargo/issues/8430

Inspirations

pytest

  • Reusable fixtures
    • Share expensive initialization
    • Test case generators make it easy to plug in custom runners
    • Fixtures and tests can skip at runtime
    • Tests only run if their fixture passes, allowing for "smoke tests" to reduce the testing scope
    • Test marks for easy running of test subsets
  • --last-failed and --failed-first support
  • Brief output, summarizing with a . per pass (F for failed)
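
As a sketch of that brief-output style: one character per test outcome plus a closing rollup line. Every name here is hypothetical, invented for illustration, not an existing API.

```rust
// Sketch of pytest-style progress output: one character per test
// ('.' pass, 'F' fail, 's' skip) plus a closing summary line.

#[derive(Clone, Copy, PartialEq)]
enum Outcome {
    Pass,
    Fail,
    Skip,
}

fn progress_char(o: Outcome) -> char {
    match o {
        Outcome::Pass => '.',
        Outcome::Fail => 'F',
        Outcome::Skip => 's',
    }
}

fn summarize(outcomes: &[Outcome]) -> String {
    let dots: String = outcomes.iter().copied().map(progress_char).collect();
    let passed = outcomes.iter().filter(|&&o| o == Outcome::Pass).count();
    let failed = outcomes.iter().filter(|&&o| o == Outcome::Fail).count();
    let skipped = outcomes.iter().filter(|&&o| o == Outcome::Skip).count();
    format!("{dots}\n{passed} passed, {failed} failed, {skipped} skipped")
}

fn main() {
    let run = [Outcome::Pass, Outcome::Pass, Outcome::Fail, Outcome::Skip];
    let report = summarize(&run);
    assert_eq!(report, "..Fs\n2 passed, 1 failed, 1 skipped");
    println!("{report}");
}
```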

Candidates

cargo-nextest

  • Replaces cargo test, doing its own coordination of the tests within the test binaries
  • Provides parallelism between binaries
  • Improves output
  • Offers CI features
  • But makes it even harder to have reusable fixtures
  • But still has link time issues
  • But doesn't help with custom runners

Cargo setting to link all test files together

  • Make default in new edition

Add new cargo-test/libtest protocol for using jobserver to run all test binaries at once
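
The jobserver idea could be modeled roughly as: acquire a token before launching each test binary, release it afterward, so total concurrency stays capped. This toy sketch uses an in-process counting semaphore instead of the real jobserver pipe protocol (a simplification); it only demonstrates the "at most N at once" property.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Toy stand-in for a jobserver token pool. The real protocol hands
// tokens out over a pipe shared with cargo/make; this only models
// the bounded-concurrency behavior.
struct TokenPool {
    state: Mutex<usize>, // tokens currently available
    cond: Condvar,
}

impl TokenPool {
    fn new(tokens: usize) -> Self {
        TokenPool { state: Mutex::new(tokens), cond: Condvar::new() }
    }

    fn acquire(&self) {
        let mut avail = self.state.lock().unwrap();
        while *avail == 0 {
            avail = self.cond.wait(avail).unwrap();
        }
        *avail -= 1;
    }

    fn release(&self) {
        *self.state.lock().unwrap() += 1;
        self.cond.notify_one();
    }
}

fn main() {
    let pool = Arc::new(TokenPool::new(2));
    let running = Arc::new(Mutex::new((0usize, 0usize))); // (current, peak)

    let handles: Vec<_> = (0..8)
        .map(|_| {
            let pool = Arc::clone(&pool);
            let running = Arc::clone(&running);
            thread::spawn(move || {
                pool.acquire();
                {
                    let mut r = running.lock().unwrap();
                    r.0 += 1;
                    r.1 = r.1.max(r.0);
                }
                // Stand-in for running one test binary.
                thread::sleep(std::time::Duration::from_millis(10));
                running.lock().unwrap().0 -= 1;
                pool.release();
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    let peak = running.lock().unwrap().1;
    assert!(peak <= 2, "never more than 2 'binaries' at once");
}
```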

Replace libtest, modeled off of pytest

"libpytest"

  • Reusable fixtures, test generators, parameterized tests, etc.
    • fixtures and tests can report alternative status (e.g. skip with reason)
    • Fixture cleanup can report failure
    • tmpdir fixture would report large tmpdirs
  • trybuild, trycmd can be test generators, tying into everything else
  • doctest test generator that replaces cargo test --doc
    • Can we compile everything into one binary?
  • Report slow tests
  • Brief output by default
  • Annotation for process isolation
  • Retry settings and annotations
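
A rough sketch of what reusable fixtures with runtime skip and cleanup could look like; every name here is hypothetical, not a proposed API surface.

```rust
// Hypothetical sketch of pytest-style fixtures: a fixture owns
// expensive setup, can report a skip (with reason) instead of a
// value, and runs cleanup when dropped.

enum FixtureResult<T> {
    Ready(T),
    Skip(String), // skip with a reason, as pytest fixtures can
}

trait Fixture: Sized {
    fn set_up() -> FixtureResult<Self>;
}

struct TempDir {
    path: std::path::PathBuf,
}

impl Fixture for TempDir {
    fn set_up() -> FixtureResult<Self> {
        let path = std::env::temp_dir().join("libpytest-demo");
        match std::fs::create_dir_all(&path) {
            Ok(()) => FixtureResult::Ready(TempDir { path }),
            Err(e) => FixtureResult::Skip(format!("no tmpdir: {e}")),
        }
    }
}

impl Drop for TempDir {
    fn drop(&mut self) {
        // A real runner could surface cleanup failure as a test outcome.
        let _ = std::fs::remove_dir_all(&self.path);
    }
}

// A runner would inject fixtures automatically; here one test is
// wired up by hand. Tests only run if their fixture is Ready.
fn run_with_fixture<F: Fixture, T: FnOnce(&F)>(test: T) -> String {
    match F::set_up() {
        FixtureResult::Ready(f) => {
            test(&f);
            "pass".to_string()
        }
        FixtureResult::Skip(reason) => format!("skip: {reason}"),
    }
}

fn main() {
    let outcome = run_with_fixture::<TempDir, _>(|dir| {
        assert!(dir.path.exists());
    });
    assert_eq!(outcome, "pass");
}
```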

In addition, provide a custom cargo test subcommand, as criterion does for cargo bench

  • Exposed so it can be pulled in as an xtask and be run with workspace-specific versions?
  • Can we use the jobserver to allow cargo pytest to run test binaries in parallel without exceeding a maximum number of threads?
  • Partitioning support
  • JUnit XML support
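
The JUnit XML support could start as small as a hand-rolled emitter for the subset of the format CI systems commonly read (a <testsuite> with <testcase> children and <failure> details). A hedged sketch, with invented names:

```rust
// Minimal JUnit XML emitter sketch (no crate, invented names).

struct CaseResult {
    name: String,
    failure: Option<String>,
}

// Escape the characters XML attribute/text content can't hold raw.
fn xml_escape(s: &str) -> String {
    s.replace('&', "&amp;")
        .replace('<', "&lt;")
        .replace('>', "&gt;")
        .replace('"', "&quot;")
}

fn to_junit_xml(suite: &str, cases: &[CaseResult]) -> String {
    let failures = cases.iter().filter(|c| c.failure.is_some()).count();
    let mut out = format!(
        "<testsuite name=\"{}\" tests=\"{}\" failures=\"{}\">\n",
        xml_escape(suite),
        cases.len(),
        failures
    );
    for case in cases {
        match &case.failure {
            None => out.push_str(&format!(
                "  <testcase name=\"{}\"/>\n",
                xml_escape(&case.name)
            )),
            Some(msg) => out.push_str(&format!(
                "  <testcase name=\"{}\"><failure>{}</failure></testcase>\n",
                xml_escape(&case.name),
                xml_escape(msg)
            )),
        }
    }
    out.push_str("</testsuite>\n");
    out
}

fn main() {
    let cases = vec![
        CaseResult { name: "parses".into(), failure: None },
        CaseResult { name: "renders".into(), failure: Some("assertion failed".into()) },
    ];
    let xml = to_junit_xml("demo", &cases);
    assert!(xml.contains("tests=\"2\""));
    assert!(xml.contains("failures=\"1\""));
    println!("{xml}");
}
```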

Open questions

  • Could we code-generate the doctests in a way to build them directly into the test executable?
    • Offer the same service to trybuild
  • How do we ensure they are updated before the test run?
    • "Self modifying code" approach would require failing and requiring them to re-run, even if the flag is set
    • Could we have a build.rs for tests?
    • Could we do this in cargo pytest?