Benchmarks

Nextest's execution model generally leads to faster test runs than cargo test. How much faster depends on the specifics, but here are some general guidelines:

  • Larger workspaces will see a greater benefit. Larger workspaces have more crates, more test binaries, and therefore more potential bottlenecks.
  • Bottlenecks from "long pole" tests. Nextest excels when the slowest tests are spread across multiple test binaries: cargo test runs those binaries serially, while nextest runs their tests in parallel (see the sketch after this list).
  • Build caching. Test runs are only one component of end-to-end execution time. Speeding up the build with sccache, the Rust Cache GitHub Action, or similar tools makes test run time a proportionally greater share of the total, so nextest's improvements matter more.
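To make the "long pole" point concrete, here is a minimal, hypothetical sketch: two integration-test binaries, each dominated by one slow test. The file names and sleep durations are made up for illustration.

```rust
// Hypothetical layout: two integration-test binaries, each containing one
// slow "long pole" test.

// tests/api.rs
#[test]
fn slow_api_test() {
    // Stands in for a test that dominates this binary's runtime.
    std::thread::sleep(std::time::Duration::from_secs(5));
}

// tests/storage.rs
#[test]
fn slow_storage_test() {
    std::thread::sleep(std::time::Duration::from_secs(5));
}
```

With cargo test the two binaries run back to back, so the slow tests add up (roughly 10 seconds here); nextest schedules tests from both binaries at once, so the wall-clock time approaches the single slowest test (roughly 5 seconds).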

Even if nextest doesn't result in faster test runs, you may find it useful for identifying test bottlenecks, for its user interface, or for its other features.

Results

| Project     | Revision | Test count | cargo test (s) | nextest (s) | Improvement |
|-------------|----------|-----------:|---------------:|------------:|------------:|
| crucible    | cb228c2b |        483 |           5.14 |        1.52 |       3.38× |
| guppy       | 2cc51b41 |        271 |           6.42 |        2.80 |       2.29× |
| mdBook      | 0079184c |        199 |           3.85 |        1.66 |       2.31× |
| meilisearch | bfb1f927 |        721 |          57.04 |       28.99 |       1.96× |
| omicron     | e7949cd1 |        619 |         444.08 |      202.50 |       2.19× |
| penumbra    | 4ecd94cc |        144 |         125.38 |       90.96 |       1.37× |
| reqwest     | 3459b894 |        113 |           5.57 |        2.26 |       2.48× |
| ring        | 450ada28 |        179 |          13.12 |        9.40 |       1.39× |
| tokio       | 1f50c571 |       1138 |          24.27 |       11.60 |       2.09× |

Specifications

All measurements were done on:

  • Processor: AMD Ryzen 9 7950X x86_64, 16 cores/32 threads
  • Operating system: Pop!_OS 22.04 running Linux kernel 6.0.12
  • RAM: 64GB
  • Rust: version 1.66.0

The commands run were:

  • cargo test: cargo test --workspace --bins --lib --tests --examples --no-fail-fast (to exclude doctests since they're not supported by nextest)
  • nextest: cargo nextest run --workspace --bins --lib --tests --examples --no-fail-fast

The measurements do not include the time taken to build the tests. To ensure a fully warm build cache, each command was run 5 times in succession; the recorded measurement is the minimum of runs 3, 4, and 5.
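For readers who want to reproduce a measurement, here is a minimal sketch of that methodology in Rust; the command line mirrors the nextest invocation above, and the first two runs serve only to warm the build cache.

```rust
use std::process::Command;
use std::time::Instant;

/// Sketch of the timing methodology described above: run the test command
/// five times and report the minimum of runs 3, 4, and 5. Assumes cargo and
/// cargo-nextest are installed and the workspace is the current directory.
fn main() {
    let args = [
        "nextest", "run", "--workspace", "--bins", "--lib",
        "--tests", "--examples", "--no-fail-fast",
    ];
    let mut timings = Vec::new();

    for run in 1..=5 {
        let start = Instant::now();
        let status = Command::new("cargo")
            .args(args)
            .status()
            .expect("failed to spawn cargo");
        let elapsed = start.elapsed();
        println!("run {run}: {:.2} s ({status})", elapsed.as_secs_f64());
        timings.push(elapsed);
    }

    // Keep only runs 3-5 and take the fastest one.
    let best = timings[2..].iter().min().expect("five runs recorded");
    println!("recorded measurement: {:.2} s", best.as_secs_f64());
}
```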