Benchmarks

Nextest's execution model generally leads to faster test runs than cargo test. How much faster depends on the specifics, but here are some general guidelines:

  • Larger workspaces will see a greater benefit. This is because larger workspaces have more crates, more test binaries, and more potential spots for bottlenecks.
  • Bottlenecks with long-pole tests. Nextest excels when multiple test binaries each contain long-running tests: cargo test runs those binaries serially, while nextest can run their tests in parallel.
  • Build caching. Test runs are only one component of end-to-end execution time. Speeding up the build with sccache, the Rust Cache GitHub Action, or a similar tool makes test run time a proportionally larger share of the total, so nextest's speedups matter more.

Even if nextest doesn't result in faster test runs, you may find it useful for identifying test bottlenecks, for its user interface, or for its other features.

Results

Project      Revision    Test count   cargo test (s)   nextest (s)   Improvement
crucible     cb228c2b           483             5.14          1.52         3.38×
guppy        2cc51b41           271             6.42          2.80         2.29×
mdBook       0079184c           199             3.85          1.66         2.31×
meilisearch  bfb1f927           721            57.04         28.99         1.96×
omicron      e7949cd1           619           444.08        202.50         2.19×
penumbra     4ecd94cc           144           125.38         90.96         1.37×
reqwest      3459b894           113             5.57          2.26         2.48×
ring         450ada28           179            13.12          9.40         1.39×
tokio        1f50c571          1138            24.27         11.60         2.09×
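For reference, the Improvement column is the ratio of the two run times. A minimal sketch of the calculation, using the crucible row above (the helper name is ours, not part of any benchmark harness):

```python
def improvement(cargo_secs: float, nextest_secs: float) -> float:
    """Ratio of cargo test time to nextest time; > 1 means nextest was faster."""
    return cargo_secs / nextest_secs

# crucible row from the table above: 5.14 s vs. 1.52 s
ratio = improvement(5.14, 1.52)
print(f"{ratio:.2f}×")  # → 3.38×
```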

Specifications

All measurements were done on:

  • Processor: AMD Ryzen 9 7950X x86_64, 16 cores/32 threads
  • Operating system: Pop!_OS 22.04 running Linux kernel 6.0.12
  • RAM: 64GB
  • Rust: version 1.66.0

The commands run were:

  • cargo test: cargo test --workspace --bins --lib --tests --examples --no-fail-fast (doctests are excluded because nextest does not support them)
  • nextest: cargo nextest run --workspace --bins --lib --tests --examples --no-fail-fast

The measurements do not include the time taken to build the tests. To ensure this, each command was run 5 times in succession, and the recorded measurement is the minimum of runs 3, 4, and 5.
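The protocol above can be sketched as follows. This is an illustrative reconstruction, not the actual harness used for these benchmarks; `measure` and its parameters are our own names:

```python
import subprocess
import time

def measure(cmd: list[str], total_runs: int = 5, warmup: int = 2) -> float:
    """Run cmd total_runs times in succession and return the minimum
    wall-clock time of the post-warmup runs (with the defaults, the
    minimum of runs 3, 4, and 5)."""
    timings = []
    for _ in range(total_runs):
        start = time.monotonic()
        subprocess.run(cmd, check=True, capture_output=True)
        timings.append(time.monotonic() - start)
    return min(timings[warmup:])

# Illustrative usage:
# best = measure(["cargo", "nextest", "run", "--workspace", "--no-fail-fast"])
```

The warmup runs absorb compilation and cache effects, so the reported number reflects test execution alone.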