cargo-nextest
Welcome to the home page for cargo-nextest, a next-generation test runner for Rust projects.
Features

- Clean, beautiful user interface. Nextest presents its results concisely so you can see which tests passed and failed at a glance.
- Up to 3× as fast as cargo test. Nextest uses a state-of-the-art execution model for faster, more reliable test runs.
- Identify slow and leaky tests. Use nextest to detect misbehaving tests, identify bottlenecks during test execution, and optionally terminate tests if they take too long.
- Filter tests using an embedded language. Use powerful filter expressions to specify granular subsets of tests on the command-line, and to enable per-test overrides.
- Configure per-test overrides. Automatically retry subsets of tests, mark them as heavy, or run them serially.
- Designed for CI. Nextest addresses real-world pain points in continuous integration scenarios:
  - Use pre-built binaries for quick installation.
  - Set up CI-specific configuration profiles.
  - Reuse builds and partition test runs across multiple CI jobs. (Check out this example on GitHub Actions.)
  - Automatically retry failing tests, and mark them as flaky if they pass later.
  - Print failing output at the end of test runs.
  - Output information about test runs as JUnit XML.
- Cross-platform. Nextest works on Linux and other Unix-like OSes, macOS, and Windows, so you get the benefits of faster test runs no matter what platform you use.
- ... and more coming soon!
Quick start
Install cargo-nextest for your platform using the pre-built binaries.
Run all tests in a workspace:
cargo nextest run
For more detailed installation instructions, see Installation.
Note: Doctests are currently not supported because of limitations in stable Rust. For now, run doctests in a separate step with cargo test --doc.
Crates in this project
Crate | Description
---|---
cargo-nextest | the main test binary
nextest-runner | core nextest logic
nextest-metadata | parsers for machine-readable output
nextest-filtering | parser and evaluator for filter expressions
quick-junit | JUnit XML serializer
Contributing
The source code for nextest and this site are hosted on GitHub, at https://github.com/nextest-rs/nextest.
Contributions are welcome! Please see the CONTRIBUTING file for how to help out.
License
The source code for nextest is licensed under the MIT and Apache 2.0 licenses.
This document is licensed under CC BY 4.0. This means that you are welcome to share, adapt or modify this material as long as you give appropriate credit.
Installation and usage
cargo-nextest works on Linux and other Unix-like OSes, macOS, and Windows.
Installing pre-built binaries (recommended)
cargo-nextest is available as pre-built binaries. See Pre-built binaries for more information.
Installing from source
If pre-built binaries are not available for your platform, or you'd like to otherwise install cargo-nextest from source, see Installing from source for more information.
Windows antivirus and macOS Gatekeeper
For notes about platform-specific performance issues caused by anti-malware software on Windows and macOS, see Windows antivirus and macOS Gatekeeper.
Installing pre-built binaries
The quickest way to get going with nextest is to download a pre-built binary for your platform. The latest nextest release is available at:
- get.nexte.st/latest/linux for Linux x86_64, including Windows Subsystem for Linux (WSL)
- get.nexte.st/latest/linux-arm for Linux aarch64
- get.nexte.st/latest/mac for macOS, both x86_64 and Apple Silicon
- get.nexte.st/latest/windows for Windows x86_64
Other platforms
Nextest's CI isn't run on these platforms -- these binaries most likely work but aren't guaranteed to do so.
- get.nexte.st/latest/linux-musl for Linux x86_64, with musl libc
- get.nexte.st/latest/windows-x86 for Windows i686
- get.nexte.st/latest/freebsd for FreeBSD x86_64
- get.nexte.st/latest/illumos for illumos x86_64
These archives contain a single binary called cargo-nextest (cargo-nextest.exe on Windows). Add this binary to a location on your PATH.
The standard Linux binaries target glibc, and have a minimum requirement of glibc 2.27 (Ubuntu 18.04).
Rust targeting Linux with musl currently has a bug that Rust targeting Linux with glibc doesn't have. This bug means that nextest's linux-musl binary has slower test runs and is susceptible to signal-related races. Only use the linux-musl binary if the standard Linux binary doesn't work in your environment.
Downloading and installing from your terminal
The instructions below are suitable for both end users and CI. These links will stay stable.
NOTE: The instructions below assume that your Rust installation is managed via rustup. You can extract the archive to a different directory in your PATH if required.
If you'd like to stay on the 0.9 series to avoid breaking changes (see the stability policy for more), replace latest in the URL with 0.9.
Linux x86_64
curl -LsSf https://get.nexte.st/latest/linux | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin
Linux aarch64
curl -LsSf https://get.nexte.st/latest/linux-arm | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin
macOS (x86_64 and Apple Silicon)
curl -LsSf https://get.nexte.st/latest/mac | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin
This will download a universal binary that works on both Intel and Apple Silicon Macs.
Windows x86_64 using PowerShell
Run this in PowerShell:
$tmp = New-TemporaryFile | Rename-Item -NewName { $_ -replace 'tmp$', 'zip' } -PassThru
Invoke-WebRequest -OutFile $tmp https://get.nexte.st/latest/windows
$outputDir = if ($Env:CARGO_HOME) { Join-Path $Env:CARGO_HOME "bin" } else { "~/.cargo/bin" }
$tmp | Expand-Archive -DestinationPath $outputDir -Force
$tmp | Remove-Item
Windows x86_64 using Unix tools
If you have access to a Unix shell with curl and tar available natively on Windows (for example, if you're using shell: bash on GitHub Actions):
curl -LsSf https://get.nexte.st/latest/windows-tar | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin
Note: Windows Subsystem for Linux (WSL) users should follow the Linux x86_64 instructions.
If you're a Windows expert who can come up with a better way to do this, please add a suggestion to this issue!
Other platforms
FreeBSD x86_64
curl -LsSf https://get.nexte.st/latest/freebsd | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin
illumos x86_64
curl -LsSf https://get.nexte.st/latest/illumos | gunzip | tar xf - -C ${CARGO_HOME:-~/.cargo}/bin
As of 2022-12, the current version of illumos tar has a bug where tar zxf doesn't work over standard input.
Using cargo-binstall
If you have cargo-binstall available, you can install nextest with:
cargo binstall cargo-nextest --secure
Community-maintained binaries
These binaries are maintained by the community—thank you!
Homebrew
To install nextest with Homebrew, on macOS or Linux:
brew install cargo-nextest
Arch Linux
On Arch Linux, install nextest with pacman by running:
pacman -S cargo-nextest
Using pre-built binaries in CI
Pre-built binaries can be used in continuous integration to speed up test runs.
Using nextest in GitHub Actions
The easiest way to install nextest in GitHub Actions is to use the Install Development Tools action maintained by Taiki Endo.
To install the latest version of nextest, add this to your job after installing Rust and Cargo:
- uses: taiki-e/install-action@nextest
See this in practice with nextest's own CI.
The action will download pre-built binaries from the URL above and add them to .cargo/bin.
To install a version series or specific version, use this instead:
- uses: taiki-e/install-action@v2
  with:
    tool: nextest
    ## version (defaults to "latest") can be a series like 0.9:
    # tool: nextest@0.9
    ## version can also be a specific version like 0.9.11:
    # tool: nextest@0.9.11
Tip: GitHub Actions supports ANSI color codes. To get color support for nextest (and Cargo), add this to your workflow:
env:
  CARGO_TERM_COLOR: always
For a full list of environment variables supported by nextest, see Environment variables.
Other CI systems
Install pre-built binaries on other CI systems by downloading and extracting the respective archives, using the commands above as a guide. See Release URLs for more about how to specify nextest versions and platforms.
If you've made it easy to install nextest on another CI system, feel free to submit a pull request with documentation.
Release URLs
Binary releases of cargo-nextest will always be available at https://get.nexte.st/{version}/{platform}.
{version} identifier
The {version} identifier is:
- latest, for the latest release (not including pre-releases)
- a version range, for example 0.9, for the latest release in the 0.9 series (not including pre-releases)
- the exact version number, for example 0.9.4, for that specific version
{platform} identifier
The {platform} identifier is:
- x86_64-unknown-linux-gnu.tar.gz for x86_64 Linux (tar.gz)
- x86_64-unknown-linux-musl.tar.gz for x86_64 Linux with musl (tar.gz, available for version 0.9.29+)
- aarch64-unknown-linux-gnu.tar.gz for aarch64 Linux (tar.gz, available for version 0.9.29+)
- universal-apple-darwin.tar.gz for x86_64 and arm64 macOS (tar.gz)
- x86_64-pc-windows-msvc.zip for x86_64 Windows (zip)
- x86_64-pc-windows-msvc.tar.gz for x86_64 Windows (tar.gz)
- i686-pc-windows-msvc.zip for i686 Windows (zip)
- i686-pc-windows-msvc.tar.gz for i686 Windows (tar.gz)
- x86_64-unknown-freebsd.tar.gz for x86_64 FreeBSD (tar.gz)
- x86_64-unknown-illumos.tar.gz for x86_64 illumos (tar.gz)
For convenience, the following shortcuts are defined:
- linux points to x86_64-unknown-linux-gnu.tar.gz
- linux-musl points to x86_64-unknown-linux-musl.tar.gz
- linux-arm points to aarch64-unknown-linux-gnu.tar.gz
- mac points to universal-apple-darwin.tar.gz
- windows points to x86_64-pc-windows-msvc.zip
- windows-tar points to x86_64-pc-windows-msvc.tar.gz
- windows-x86 points to i686-pc-windows-msvc.zip
- windows-x86-tar points to i686-pc-windows-msvc.tar.gz
- freebsd points to x86_64-unknown-freebsd.tar.gz
- illumos points to x86_64-unknown-illumos.tar.gz
Also, each release's canonical GitHub Releases URL is available at https://get.nexte.st/{version}/release. For example, the latest GitHub release is available at get.nexte.st/latest/release.
Examples
The latest nextest release in the 0.9 series for macOS is available as a tar.gz file at get.nexte.st/0.9/mac.
Nextest version 0.9.11 for Windows is available as a zip file at get.nexte.st/0.9.11/windows, and as a tar.gz file at get.nexte.st/0.9.11/windows-tar.
Installing from source
If pre-built binaries are not available for your platform, or you'd otherwise like to install cargo-nextest from source, here's what you need to do:
Installing from crates.io
Run the following command:
cargo install cargo-nextest --locked
Note: A plain cargo install cargo-nextest without --locked is not supported. If you run into build issues, please try with --locked before reporting an issue.
cargo-nextest must be compiled and installed with Rust 1.70 or later (see Stability policy for more), but it can build and run tests against any version of Rust.
Using a cached install in CI
Most CI users of nextest will benefit from using cached binaries. Consider using the pre-built binaries for this purpose.
See this example for how the nextest repository uses pre-built binaries.
If your CI is based on GitHub Actions, you may use the baptiste0928/cargo-install action to build cargo-nextest from source and cache the cargo-nextest binary.
jobs:
  ci:
    # ...
    steps:
      - uses: actions/checkout@v3
      # Install a Rust toolchain here.
      - name: Install cargo-nextest
        uses: baptiste0928/cargo-install@v1
        with:
          crate: cargo-nextest
          locked: true
          # Uncomment the following line if you'd like to stay on the 0.9 series
          # version: 0.9
      # At this point, cargo-nextest will be available on your PATH
Also consider using the Swatinem/rust-cache action to make your builds faster.
Installing from GitHub
Install the latest, in-development version of cargo-nextest from the GitHub repository:
cargo install --git https://github.com/nextest-rs/nextest --bin cargo-nextest
Updating nextest
Starting with version 0.9.19, cargo-nextest has update functionality built in. Simply run cargo nextest self update to check for and perform updates.
The nextest updater downloads and installs the latest version of the cargo-nextest binary from get.nexte.st.
To request a specific version, run (e.g.) cargo nextest self update --version 0.9.19.
For older versions
If you're on cargo-nextest 0.9.18 or below, update by redownloading and reinstalling the binary following the instructions at Pre-built binaries.
For distributors
cargo-nextest 0.9.21 and above has a new default-no-update feature, which contains all default features except for self-update. The recommended, forward-compatible way to build cargo-nextest is with --no-default-features --features default-no-update.
Windows antivirus and macOS Gatekeeper
This page covers common performance issues caused by anti-malware protections on Windows and macOS. These performance issues are not unique to nextest, but its execution model may exacerbate them.
Windows
Your antivirus software—typically Windows Security, also known as Microsoft Defender—might interfere with process execution, making your test runs significantly slower. For optimal performance, exclude the following directories from checks:
- The directory with all your code in it
- Your .cargo\bin directory, typically within your home directory (see this Rust issue).
Here's how to exclude directories from Windows Security.
macOS
Similar to Windows Security, macOS has a system called Gatekeeper which performs checks on binaries. Gatekeeper can cause nextest runs to be significantly slower. A typical sign of this happening is even the simplest of tests in cargo nextest run taking more than 0.2 seconds.
Adding your terminal to Developer Tools will cause any processes run by it to be excluded from Gatekeeper. For optimal performance, add your terminal to Developer Tools. You may also need to run cargo clean afterwards.
How to add your terminal to Developer Tools
1. Run sudo spctl developer-mode enable-terminal in your terminal.
2. Go to System Preferences, and then to Security & Privacy.
3. Under the Privacy tab, an item called Developer Tools should be present. Navigate to it.
4. Ensure that your terminal is listed and enabled. If you're using a third-party terminal like iTerm, be sure to add it to the list. (You may have to click the lock in the bottom-left corner and authenticate.)
5. Restart your terminal.
See this comment on Hacker News for more.
There are still some reports of performance issues on macOS after Developer Tools have been enabled. If you're seeing this, please add a note to this issue!
Usage
This section covers usage, features and options for cargo-nextest.
Basic usage
To build and run all tests in a workspace, cd into the workspace and run:
cargo nextest run
For more information about running tests, see Running tests.
Limitations
- The nextest execution model means that each individual test is executed as a separate process. Tests that depend on being executed within the same process may not work correctly. To work around this, consider combining those tests into one so that nextest runs them as a unit, or excluding those tests from nextest.
- There's no way to mark a particular test binary as excluded from nextest.
- Doctests are currently not supported because of limitations in stable Rust. Locally and in CI, after cargo nextest run, use cargo test --doc to run all doctests.
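The first workaround above, combining interdependent tests into one, can be sketched as follows. This is a hypothetical example (the SharedState struct and check names are invented for illustration): sub-checks that rely on earlier steps having run in the same process are folded into a single test function, which nextest then runs as one unit.

```rust
// Hypothetical example: two checks that depend on shared in-process
// state, folded into one test so nextest runs them as a single unit.
struct SharedState {
    counter: u32,
}

fn check_increment(state: &mut SharedState) {
    state.counter += 1;
    assert_eq!(state.counter, 1);
}

fn check_sees_previous_step(state: &mut SharedState) {
    // This check only passes if check_increment ran first.
    state.counter += 1;
    assert_eq!(state.counter, 2);
}

// In a real test file this would be `#[test] fn combined_state_tests()`.
fn combined_state_tests() -> u32 {
    let mut state = SharedState { counter: 0 };
    check_increment(&mut state);
    check_sees_previous_step(&mut state);
    state.counter
}

fn main() {
    assert_eq!(combined_state_tests(), 2);
    println!("combined test passed");
}
```

Run as separate #[test] functions, the two checks would land in different processes under nextest and the second would fail; combined, they always share one process.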
Running tests
To build and run all tests in a workspace, cd into the workspace and run:
cargo nextest run
This will produce output that looks like:
In the output above:
- Tests are marked PASS or FAIL, and the amount of wall-clock time each test takes is listed within square brackets. In the example above, test_list_tests passed and took 0.052 seconds to execute.
- Tests that take more than a specified amount of time are marked SLOW. The timeout is 60 seconds by default, and can be changed through configuration.
- The part of the test in purple is the test binary. A test binary is either:
  - a unit test binary built from tests inline within lib.rs. These test binaries are shown by nextest as just the crate name, without a :: separator inside them.
  - an integration test binary built from tests in the [[test]] section of Cargo.toml (typically tests in the tests directory). These tests are shown by nextest in the format crate-name::bin-name.
  For more about unit and integration tests, see the documentation for cargo test.
- The part after the test binary is the test name, including the module the test is in. The final part of the test name is highlighted in bold blue text.
cargo nextest run supports all the options that cargo test does. For example, to only execute tests for a package called my-package:
cargo nextest run -p my-package
For a full list of options accepted by cargo nextest run, see cargo nextest run --help.
Note: bin and example targets can also contain tests. Those are represented as crate-name::bin/bin-name and crate-name::example/example-name, respectively.
Filtering tests
To only run tests that match certain names:
cargo nextest run <test-name1> <test-name2>...
This is different from cargo test, where you have to specify a --, for example: cargo test -- <test-name1> <test-name2>....
--skip and --exact
Nextest does not support --skip and --exact directly; instead, it supports more powerful filter expressions which supersede these options.
Here are some examples:
Cargo test command | Nextest command
---|---
cargo test -- --skip skip1 --skip skip2 test3 | cargo nextest run -E 'test(test3) - test(/skip[12]/)'
cargo test -- --exact test1 test2 | cargo nextest run -E 'test(=test1) + test(=test2)'
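The filter expressions above can be read as set operations over test names. The following is an illustrative model of those semantics, not the real nextest-filtering implementation: test(s) selects names containing s, test(=s) selects exact matches, + is union, and - is difference (the regex /skip[12]/ is modeled here with a plain prefix check).

```rust
// Illustrative model of filter-expression set semantics.
// `matches_substring` stands in for `test(s)`, `matches_exact` for `test(=s)`.
fn matches_substring(name: &str, s: &str) -> bool {
    name.contains(s)
}

fn matches_exact(name: &str, s: &str) -> bool {
    name == s
}

fn main() {
    let tests = ["test1", "test2", "test3", "skip1", "skip2"];

    // -E 'test(test3) - test(/skip[12]/)': substring match minus a
    // regex match (modeled as a prefix check for this sketch).
    let run1: Vec<&str> = tests
        .iter()
        .copied()
        .filter(|&t| matches_substring(t, "test3") && !t.starts_with("skip"))
        .collect();
    assert_eq!(run1, ["test3"]);

    // -E 'test(=test1) + test(=test2)': union of two exact matches.
    let run2: Vec<&str> = tests
        .iter()
        .copied()
        .filter(|&t| matches_exact(t, "test1") || matches_exact(t, "test2"))
        .collect();
    assert_eq!(run2, ["test1", "test2"]);

    println!("{run1:?} {run2:?}");
}
```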
Filtering by build platform
While cross-compiling code, some tests (e.g. proc-macro tests) may need to be run on the host platform. To filter tests based on the build platform they're for, nextest's filter expressions accept the platform() set with values target and host.
For example, to only run tests for the host platform:
cargo nextest run -E 'platform(host)'
Displaying live test output
By default, cargo nextest run will capture test output and only display it on failure. If you do not want to capture test output:
cargo nextest run --no-capture
In this mode, cargo-nextest will run tests serially so that output from different tests isn't interspersed. This is different from cargo test -- --nocapture, which will run tests in parallel.
Doctests are currently not supported because of limitations in stable Rust. For now, run doctests in a separate step with cargo test --doc.
Options and arguments
Build and run tests
Usage: cargo nextest run [OPTIONS] [FILTERS]... [-- <TEST-BINARY-ARGS>...]
Arguments:
[FILTERS]... Test name filter
[TEST-BINARY-ARGS]... Emulated cargo test binary arguments (partially supported)
Options:
--manifest-path <PATH> Path to Cargo.toml
-P, --profile <PROFILE> Nextest profile to use [env: NEXTEST_PROFILE=]
-v, --verbose Verbose output [env: NEXTEST_VERBOSE=]
--color <WHEN> Produce color output: auto, always, never [env: CARGO_TERM_COLOR=] [default: auto]
-h, --help Print help (see more with '--help')
Runner options:
--no-run Compile, but don't run tests
-j, --test-threads <THREADS> Number of tests to run simultaneously [possible values: integer or "num-cpus"] [default: from profile] [env: NEXTEST_TEST_THREADS=] [aliases: jobs]
--retries <RETRIES> Number of retries for failing tests [default: from profile] [env: NEXTEST_RETRIES=]
--fail-fast Cancel test run on the first failure
--no-fail-fast Run all tests regardless of failure
--no-capture Run tests serially and do not capture output
Cargo options:
--lib Test only this package's library unit tests
--bin <BIN> Test only the specified binary
--bins Test all binaries
--example <EXAMPLE> Test only the specified example
--examples Test all examples
--test <TEST> Test only the specified test target
--tests Test all targets
--bench <BENCH> Test only the specified bench target
--benches Test all benches
--all-targets Test all targets
-p, --package <PACKAGES> Package to test
--workspace Build all packages in the workspace
--exclude <EXCLUDE> Exclude packages from the test
--all Alias for workspace (deprecated)
-r, --release Build artifacts in release mode, with optimizations
--cargo-profile <NAME> Build artifacts with the specified Cargo profile
--build-jobs <JOBS> Number of build jobs to run
-F, --features <FEATURES> Space or comma separated list of features to activate
--all-features Activate all available features
--no-default-features Do not activate the `default` feature
--target <TRIPLE> Build for the target triple
--target-dir <DIR> Directory for all generated artifacts
--ignore-rust-version Ignore `rust-version` specification in packages
--unit-graph Output build graph in JSON (unstable)
--future-incompat-report Outputs a future incompatibility report at the end of the build
--frozen Require Cargo.lock and cache are up to date
--locked Require Cargo.lock is up to date
--offline Run without accessing the network
--config <KEY=VALUE> Override a configuration value
-Z <FLAG> Unstable (nightly-only) flags to Cargo, see 'cargo -Z help' for details
Filter options:
--run-ignored <WHICH> Run ignored tests [possible values: default, ignored-only, all]
--partition <PARTITION> Test partition, e.g. hash:1/2 or count:2/3
-E, --filter-expr <EXPRESSION> Test filter expression (see
<https://nexte.st/book/filter-expressions>)
Reporter options:
--failure-output <WHEN> Output stdout and stderr on failure [env: NEXTEST_FAILURE_OUTPUT=] [possible values: immediate, immediate-final, final, never]
--success-output <WHEN> Output stdout and stderr on success [env: NEXTEST_SUCCESS_OUTPUT=] [possible values: immediate, immediate-final, final, never]
--status-level <LEVEL> Test statuses to output [env: NEXTEST_STATUS_LEVEL=] [possible values: none, fail, retry, slow, leak, pass, skip, all]
--final-status-level <LEVEL> Test statuses to output at the end of the run [env: NEXTEST_FINAL_STATUS_LEVEL=] [possible values: none, fail, flaky, slow, skip, pass, all]
--hide-progress-bar Do not display the progress bar [env: NEXTEST_HIDE_PROGRESS_BAR=]
Reuse build options:
--archive-file <PATH> Path to nextest archive
--archive-format <FORMAT> Archive format [default: auto] [possible values: auto, tar-zst]
--extract-to <DIR> Destination directory to extract archive to [default: temporary directory]
--extract-overwrite Overwrite files in destination directory while extracting archive
--persist-extract-tempdir Persist temporary directory destination is extracted to
--cargo-metadata <PATH> Path to cargo metadata JSON
--workspace-remap <PATH> Remapping for the workspace root
--binaries-metadata <PATH> Path to binaries-metadata JSON
--target-dir-remap <PATH> Remapping for the target directory
Config options:
--config-file <PATH> Config file [default: workspace-root/.config/nextest.toml]
--tool-config-file <TOOL:ABS_PATH> Tool-specific config files
Listing tests
To build and list all tests in a workspace, cd into the workspace and run:
cargo nextest list
cargo nextest list takes most of the same options that cargo nextest run takes. For a full list of options accepted, see cargo nextest list --help.
Doctests are currently not supported because of limitations in stable Rust. For now, run doctests in a separate step with cargo test --doc.
Options and arguments
List tests in workspace
Usage: cargo nextest list [OPTIONS] [FILTERS]... [-- <TEST-BINARY-ARGS>...]
Arguments:
[FILTERS]... Test name filter
[TEST-BINARY-ARGS]... Emulated cargo test binary arguments (partially supported)
Options:
--manifest-path <PATH> Path to Cargo.toml
-v, --verbose Verbose output [env: NEXTEST_VERBOSE=]
--color <WHEN> Produce color output: auto, always, never [env: CARGO_TERM_COLOR=] [default: auto]
-h, --help Print help (see more with '--help')
Cargo options:
--lib Test only this package's library unit tests
--bin <BIN> Test only the specified binary
--bins Test all binaries
--example <EXAMPLE> Test only the specified example
--examples Test all examples
--test <TEST> Test only the specified test target
--tests Test all targets
--bench <BENCH> Test only the specified bench target
--benches Test all benches
--all-targets Test all targets
-p, --package <PACKAGES> Package to test
--workspace Build all packages in the workspace
--exclude <EXCLUDE> Exclude packages from the test
--all Alias for workspace (deprecated)
-r, --release Build artifacts in release mode, with optimizations
--cargo-profile <NAME> Build artifacts with the specified Cargo profile
--build-jobs <JOBS> Number of build jobs to run
-F, --features <FEATURES> Space or comma separated list of features to activate
--all-features Activate all available features
--no-default-features Do not activate the `default` feature
--target <TRIPLE> Build for the target triple
--target-dir <DIR> Directory for all generated artifacts
--ignore-rust-version Ignore `rust-version` specification in packages
--unit-graph Output build graph in JSON (unstable)
--future-incompat-report Outputs a future incompatibility report at the end of the build
--frozen Require Cargo.lock and cache are up to date
--locked Require Cargo.lock is up to date
--offline Run without accessing the network
--config <KEY=VALUE> Override a configuration value
-Z <FLAG> Unstable (nightly-only) flags to Cargo, see 'cargo -Z help' for details
Filter options:
--run-ignored <WHICH> Run ignored tests [possible values: default, ignored-only, all]
--partition <PARTITION> Test partition, e.g. hash:1/2 or count:2/3
-E, --filter-expr <EXPRESSION> Test filter expression (see
<https://nexte.st/book/filter-expressions>)
Output options:
-T, --message-format <FMT> Output format [default: human] [possible values: human, json, json-pretty]
--list-type <TYPE> Type of listing [default: full] [possible values: full, binaries-only]
Reuse build options:
--archive-file <PATH> Path to nextest archive
--archive-format <FORMAT> Archive format [default: auto] [possible values: auto, tar-zst]
--extract-to <DIR> Destination directory to extract archive to [default: temporary directory]
--extract-overwrite Overwrite files in destination directory while extracting archive
--persist-extract-tempdir Persist temporary directory destination is extracted to
--cargo-metadata <PATH> Path to cargo metadata JSON
--workspace-remap <PATH> Remapping for the workspace root
--binaries-metadata <PATH> Path to binaries-metadata JSON
--target-dir-remap <PATH> Remapping for the target directory
Config options:
--config-file <PATH> Config file [default: workspace-root/.config/nextest.toml]
--tool-config-file <TOOL:ABS_PATH> Tool-specific config files
Retries and flaky tests
Sometimes, tests fail nondeterministically, which can be quite annoying to developers locally and in CI. cargo-nextest supports retrying failed tests with the --retries option. If a test succeeds during a retry, the test is marked flaky. Here's an example:
--retries 2 means that the test is retried twice, for a total of three attempts. In this case, the test fails on the first try but succeeds on the second try. The TRY 2 PASS text means that the test passed on the second try.
Flaky tests are treated as ultimately successful. If there are no other tests that failed, the exit code for the test run is 0.
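The retry semantics described above can be sketched as a loop: with --retries N a test gets up to N + 1 attempts, and a test that fails and then passes is recorded as flaky rather than failed. This is an illustrative model, not nextest's actual runner code:

```rust
#[derive(Debug, PartialEq)]
enum Outcome {
    Pass,
    Flaky,
    Fail,
}

// Run `attempt` up to retries + 1 times. A pass on the first try is
// Pass; a pass on a later try is Flaky; exhausting all tries is Fail.
fn run_with_retries(retries: u32, mut attempt: impl FnMut() -> bool) -> Outcome {
    for try_num in 0..=retries {
        if attempt() {
            return if try_num == 0 { Outcome::Pass } else { Outcome::Flaky };
        }
    }
    Outcome::Fail
}

fn main() {
    // Fails on the first try, passes on the second: flaky. A run whose
    // only non-passing tests are flaky still exits with code 0.
    let mut tries = 0;
    let outcome = run_with_retries(2, || {
        tries += 1;
        tries >= 2
    });
    assert_eq!(outcome, Outcome::Flaky);
    println!("{outcome:?}");
}
```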
Retries can also be:
- passed in via the environment variable NEXTEST_RETRIES.
- configured in .config/nextest.toml.
For the order that configuration parameters are resolved in, see Hierarchical configuration.
Delays and backoff
In some situations you may wish to add delays between retries; for example, if your test hits a network service that is rate limited. In those cases, you can insert delays between test attempts with a backoff algorithm.
Note: Delays and backoff can only be specified through configuration. Passing in --retries via the command line or specifying the NEXTEST_RETRIES environment variable will override delays and backoff specified through configuration.
Fixed backoff
To insert a constant delay between test attempts, use the fixed backoff algorithm. For example, to retry tests up to twice with a 1 second delay between attempts, use:
[profile.default]
retries = { backoff = "fixed", count = 2, delay = "1s" }
Exponential backoff
Nextest also supports exponential backoff, where the delay between attempts doubles each time. For example, to retry tests up to 3 times with successive delays of 5 seconds, 10 seconds, and 20 seconds, use:
[profile.default]
retries = { backoff = "exponential", count = 3, delay = "5s" }
A maximum delay can also be specified to avoid delays from becoming too large. In the above example, if count = 5, the fourth and fifth retries would occur with delays of 40 seconds and 80 seconds, respectively. To clamp delays at 30 seconds, use:
[profile.default]
retries = { backoff = "exponential", count = 3, delay = "5s", max-delay = "30s" }
This effectively performs a truncated exponential backoff.
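The delay schedule above can be computed as follows. This is an illustrative sketch of truncated exponential backoff, not nextest's internal code:

```rust
use std::time::Duration;

// Delay before the nth retry (n starting at 1): base * 2^(n-1),
// clamped to max_delay if one is configured.
fn backoff_delay(base: Duration, n: u32, max_delay: Option<Duration>) -> Duration {
    let delay = base * 2u32.pow(n - 1);
    match max_delay {
        Some(max) => delay.min(max),
        None => delay,
    }
}

fn main() {
    let base = Duration::from_secs(5);
    let max = Some(Duration::from_secs(30));
    // count = 5 with delay = "5s" and max-delay = "30s":
    // 5s, 10s, 20s, then clamped to 30s, 30s.
    let delays: Vec<u64> = (1..=5)
        .map(|n| backoff_delay(base, n, max).as_secs())
        .collect();
    assert_eq!(delays, [5, 10, 20, 30, 30]);
    println!("{delays:?}");
}
```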
Adding jitter
To avoid thundering herd problems, it can be useful to add randomness to delays. To do so, use jitter = true.
[profile.default]
retries = { backoff = "exponential", count = 3, delay = "1s", jitter = true }
jitter = true also works for fixed backoff.
The current jitter algorithm picks a value between 0.5 * delay and delay uniformly at random. This is not part of the stable interface and is subject to change.
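The jitter rule described above, a uniformly random value between 0.5 * delay and delay, can be sketched as follows. To keep the sketch dependency-free it uses a tiny hand-rolled generator in place of a real RNG; nextest's actual implementation differs and, as noted, is subject to change:

```rust
use std::time::Duration;

// Minimal linear congruential generator, standing in for a real RNG.
fn next_u64(state: &mut u64) -> u64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *state
}

// Pick a jittered delay uniformly in [delay / 2, delay].
fn jittered(delay: Duration, state: &mut u64) -> Duration {
    let half = delay / 2;
    let span = (delay - half).as_nanos() as u64;
    let offset = if span == 0 { 0 } else { next_u64(state) % (span + 1) };
    half + Duration::from_nanos(offset)
}

fn main() {
    let mut state = 42u64;
    let delay = Duration::from_secs(1);
    // Every jittered delay stays within [0.5s, 1s].
    for _ in 0..10 {
        let j = jittered(delay, &mut state);
        assert!(j >= delay / 2 && j <= delay);
    }
    println!("jittered delays stay within bounds");
}
```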
Per-test overrides
Nextest supports per-test overrides for retries, letting you mark a subset of tests as needing retries. For example, to mark test names containing "test_e2e" as requiring retries:
[[profile.default.overrides]]
filter = 'test(test_e2e)'
retries = 2
Per-test overrides support the full set of delay and backoff options as well. For example:
[[profile.default.overrides]]
filter = 'test(test_remote_api)'
retries = { backoff = "exponential", count = 2, delay = "5s", jitter = true }
Note: The --retries command-line option and the NEXTEST_RETRIES environment variable both disable overrides.
JUnit support
Flaky test detection is integrated with nextest's JUnit support. For more information, see JUnit support.
Slow tests and timeouts
Slow tests can bottleneck your test run. Nextest identifies tests that take more than a certain amount of time, and optionally lets you terminate tests that take too long to run.
Slow tests
For tests that take more than a certain amount of time (by default 60 seconds), nextest prints out a SLOW status. For example, in the output below, test_slow_timeout takes 90 seconds to execute and is marked as a slow test.
Starting 6 tests across 8 binaries (19 skipped)
    PASS [ 0.001s] nextest-tests::basic test_success
    PASS [ 0.001s] nextest-tests::basic test_success_should_panic
    PASS [ 0.001s] nextest-tests::other other_test_success
    PASS [ 0.001s] nextest-tests tests::unit_test_success
    PASS [ 1.501s] nextest-tests::basic test_slow_timeout_2
    SLOW [> 60.000s] nextest-tests::basic test_slow_timeout
    PASS [ 90.001s] nextest-tests::basic test_slow_timeout
------------
Summary [ 90.002s] 6 tests run: 6 passed (1 slow), 19 skipped
Configuring timeouts
To customize how long it takes before a test is marked slow, you can use the slow-timeout configuration parameter. For example, to set a timeout of 2 minutes before a test is marked slow, add this to .config/nextest.toml:
[profile.default]
slow-timeout = "2m"
Nextest uses the humantime parser: see its documentation for the full supported syntax.
Terminating tests after a timeout
Nextest lets you optionally specify a timeout after which a test is terminated. For example, to configure a slow timeout of 60 seconds and for tests to be terminated after 3 minutes, add this to .config/nextest.toml:
[profile.default]
slow-timeout = { period = "60s", terminate-after = 3 }
terminate-after indicates the number of slow-timeout periods after which the test is terminated.
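The time before termination is therefore the slow-timeout period multiplied by terminate-after. A quick sketch of the arithmetic:

```rust
use std::time::Duration;

// Total time a test may run before being terminated:
// slow-timeout period * terminate-after.
fn time_to_termination(period: Duration, terminate_after: u32) -> Duration {
    period * terminate_after
}

fn main() {
    // period = "60s", terminate-after = 3: terminated after 3 minutes.
    assert_eq!(
        time_to_termination(Duration::from_secs(60), 3),
        Duration::from_secs(180)
    );
    // period = "1s", terminate-after = 2: terminated after 2 seconds.
    assert_eq!(
        time_to_termination(Duration::from_secs(1), 2),
        Duration::from_secs(2)
    );
    println!("ok");
}
```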
The run below is configured with:

```toml
slow-timeout = { period = "1s", terminate-after = 2 }
```

```
    Starting 5 tests across 8 binaries (20 skipped)
        PASS [   0.001s] nextest-tests::basic test_success
        PASS [   0.001s] nextest-tests::basic test_success_should_panic
        PASS [   0.001s] nextest-tests::other other_test_success
        PASS [   0.001s] nextest-tests tests::unit_test_success
        SLOW [>  1.000s] nextest-tests::basic test_slow_timeout
        SLOW [>  2.000s] nextest-tests::basic test_slow_timeout
     TIMEOUT [   2.001s] nextest-tests::basic test_slow_timeout

--- STDOUT:              nextest-tests::basic test_slow_timeout ---
running 1 test

------------
     Summary [   2.001s] 5 tests run: 4 passed, 1 timed out, 20 skipped
```
How nextest terminates tests
On Unix platforms, nextest creates a process group for each test. On timing out, nextest attempts a graceful shutdown: it first sends the SIGTERM signal to the process group, then waits for a grace period (by default 10 seconds) for the test to shut down. If the test doesn't shut itself down within that time, nextest sends SIGKILL (`kill -9`) to the process group to terminate it immediately.
To customize the grace period, use the `slow-timeout.grace-period` configuration setting. For example, with the `ci` profile, to terminate tests after 5 minutes with a grace period of 30 seconds:

```toml
[profile.ci]
slow-timeout = { period = "60s", terminate-after = 5, grace-period = "30s" }
```
To send SIGKILL to a process immediately, without a grace period, set `slow-timeout.grace-period` to zero:

```toml
[profile.ci]
slow-timeout = { period = "60s", terminate-after = 5, grace-period = "0s" }
```
On other platforms, including Windows, nextest terminates the test immediately in a manner akin to SIGKILL. (On Windows, nextest uses job objects to kill the test process and all its descendants.) The `slow-timeout.grace-period` configuration setting is ignored.
Per-test overrides
Nextest supports per-test overrides for the slow-timeout and terminate-after settings.
For example, some end-to-end tests might take longer to run and sometimes get stuck. For tests containing the substring `test_e2e`, to configure a slow timeout of 120 seconds and terminate tests after 10 minutes (5 periods of 120 seconds):

```toml
[[profile.default.overrides]]
filter = 'test(test_e2e)'
slow-timeout = { period = "120s", terminate-after = 5 }
```
See Override precedence for more about the order in which overrides are evaluated.
Leaky tests
Some tests create subprocesses but may not clean them up properly. Typical scenarios include:

- A test creates a server process to test against, but does not shut it down at the end of the test.
- A test starts a subprocess with the intent to shut it down, but panics, and does not use the RAII pattern to clean up subprocesses. Note that `std::process::Child` does not kill the subprocess on being dropped; some alternatives, such as `tokio::process::Command`, can be configured to do so.
- This can happen transitively as well: a test creates a process which creates its own subprocess, and so on.
Nextest can detect some, but not all, such situations. If nextest detects a subprocess leak, it marks the corresponding test as leaky.
Leaky tests nextest detects
Currently, nextest is limited to detecting subprocesses that inherit standard output or standard error from the test. For example, here's a test that nextest will mark as leaky:

```rust
#[test]
fn test_subprocess_doesnt_exit() {
    let mut cmd = std::process::Command::new("sleep");
    cmd.arg("120");
    cmd.spawn().unwrap();
}
```
For this test, nextest will output something like:
```
    Starting 1 tests across 8 binaries (24 skipped)
        LEAK [   0.103s] nextest-tests::basic test_subprocess_doesnt_exit
------------
     Summary [   0.103s] 1 tests run: 1 passed (1 leaky), 24 skipped
```
Leaky tests that are otherwise successful are considered to have passed.
Leaky tests that nextest currently does not detect
Tests which spawn subprocesses that do not inherit either standard output or standard error are not currently detected by nextest. For example, the following test is not currently detected as leaky:

```rust
#[test]
fn test_subprocess_doesnt_exit_2() {
    let mut cmd = std::process::Command::new("sleep");
    cmd.arg("120")
        .stdout(std::process::Stdio::null())
        .stderr(std::process::Stdio::null());
    cmd.spawn().unwrap();
}
```
Detecting such tests is a very difficult problem to solve, particularly on Unix platforms.
Note: This section is not part of nextest's stability guarantees. In the future, these tests might get marked as leaky by nextest.
Configuring the leak timeout
Nextest waits a specified amount of time (by default 100 milliseconds) after the test exits for standard output and standard error to be closed. In rare cases, you may need to configure the leak timeout.
To do so, use the `leak-timeout` configuration parameter. For example, to wait up to 500 milliseconds after the test exits, add this to `.config/nextest.toml`:

```toml
[profile.default]
leak-timeout = "500ms"
```
Nextest also supports per-test overrides for the leak timeout.
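For example, a per-test override might extend the leak timeout for tests with slower teardown (the `test(slow_shutdown)` filter below is illustrative):

```toml
[[profile.default.overrides]]
filter = 'test(slow_shutdown)'
leak-timeout = "1s"
```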
Filter expressions
Nextest supports a domain-specific language (DSL) for filtering tests. The DSL is inspired by, and is similar to, Bazel query and Mercurial revsets.
Example: Running all tests in a crate and its dependencies
To run all tests in `my-crate` and its dependencies, run:

```shell
cargo nextest run -E 'deps(my-crate)'
```

The argument passed to the `-E` command-line option is called a filter expression. The rest of this page describes the full syntax of the expression DSL.
The filter expression DSL
A filter expression defines a set of tests. A test will be run if it matches a filter expression.
On the command line, multiple filter expressions can be passed in; a test is run if it matches any of them. For example, to run tests whose names contain the string `my_test`, as well as all tests in the package `my-crate`, run:

```shell
cargo nextest run -E 'test(my_test)' -E 'package(my-crate)'
```

This is equivalent to:

```shell
cargo nextest run -E 'test(my_test) + package(my-crate)'
```
Examples
- `package(serde) and test(deserialize)`: every test containing the string `deserialize` in the package `serde`
- `not (test(/parse[0-9]*/) | test(run))`: every test whose name neither matches the regex `parse[0-9]*` nor contains the substring `run`
Note: If you pass in both a filter expression and a standard, substring-based filter, tests must match both. For example, the command:

```shell
cargo nextest run -E 'package(foo)' test_bar test_baz
```

will run all tests that are both in package `foo` and match `test_bar` or `test_baz`.
DSL reference
This section contains the full set of operators supported by the DSL.
Basic predicates
- `all()`: include all tests.
- `test(name-matcher)`: include all tests matching `name-matcher`.
- `package(name-matcher)`: include all tests in packages (crates) matching `name-matcher`.
- `deps(name-matcher)`: include all tests in crates matching `name-matcher`, and all of their (possibly transitive) dependencies.
- `rdeps(name-matcher)`: include all tests in crates matching `name-matcher`, and all the crates that (possibly transitively) depend on `name-matcher`.
- `kind(name-matcher)`: include all tests in binary kinds matching `name-matcher`. Binary kinds include:
  - `lib` for unit tests, typically in the `src/` directory
  - `test` for integration tests, typically in the `tests/` directory
  - `bench` for benchmark tests
  - `bin` for tests within `[[bin]]` targets
  - `proc-macro` for tests in the `src/` directory of a procedural macro
- `binary(name-matcher)`: include all tests in binary names matching `name-matcher`.
  - For tests of kind `lib` and `proc-macro`, the binary name is the same as the name of the crate.
  - Otherwise, it's the name of the integration test, benchmark, or binary target.
- `platform(host)` or `platform(target)`: include all tests that are built for the host or target platform, respectively.
- `none()`: include no tests.
Note: If a filter expression always excludes a particular binary, it will not be run, even to get the list of tests within it. This means that a command like:

```shell
cargo nextest list -E 'platform(host)'
```

will not execute any test binaries built for the target platform. This is generally what you want, but if you would like to list tests anyway, include a `test()` predicate. For example, to list test binaries for the target platform (using, for example, a target runner), but skip running them:

```shell
cargo nextest list -E 'platform(host) + not test(/.*/)' --verbose
```
Name matchers
- `~string`: match a package or test name containing `string`
- `=string`: match a package or test name that's equal to `string`
- `/regex/`: match a package or test name if any part of it matches the regular expression `regex`. To match the entire string against a regular expression, use `/^regex$/`. The implementation uses the regex crate.
- `string`: default matching strategy.
  - For tests (`test()`), this is equivalent to `~string`.
  - For packages (`package()`, `deps()` and `rdeps()`), binary kinds (`kind()`), and `platform()`, this is equivalent to `=string`.
If you're constructing an expression string programmatically, it is recommended that you always use a prefix to avoid ambiguity.
Escape sequences
The `~string` and `=string` name matchers can contain escape sequences, preceded by a backslash (`\`).

- `\n`: line feed
- `\r`: carriage return
- `\t`: tab
- `\\`: backslash
- `\/`: forward slash
- `\)`: closing parenthesis
- `\,`: comma
- `\u{7FFF}`: 24-bit Unicode character code (up to 6 hex digits)

All other escape sequences are invalid.

The regular expression matcher supports the same escape sequences that the regex crate does. This includes character classes like `\d`. Additionally, `\/` is interpreted as an escaped `/`.
Operators
- `set_1 & set_2`, `set_1 and set_2`: the intersection of `set_1` and `set_2`
- `set_1 | set_2`, `set_1 + set_2`, `set_1 or set_2`: the union of `set_1` and `set_2`
- `not set`, `!set`: include everything not included in `set`
- `set_1 - set_2`: equivalent to `set_1 and not set_2`
- `(set)`: include everything in `set`
Operator precedence
In order from highest to lowest, or in other words from tightest to loosest binding:
1. `()`
2. `not`, `!`
3. `and`, `&`, `-`
4. `or`, `|`, `+`
Within a precedence group, operators bind from left to right.
Examples
- `test(a) & test(b) | test(c)` is equivalent to `(test(a) & test(b)) | test(c)`.
- `test(a) | test(b) & test(c)` is equivalent to `test(a) | (test(b) & test(c))`.
- `test(a) & test(b) - test(c)` is equivalent to `(test(a) & test(b)) - test(c)`.
- `not test(a) | test(b)` is equivalent to `(not test(a)) | test(b)`.
Archiving and reusing builds
In some cases, it can be useful to separate out building tests from running them. Nextest supports archiving builds on one machine, and then extracting the archive to run tests on another machine.
Terms
- Build machine: The computer that builds tests.
- Target machine: The computer that runs tests.
Use cases
- Cross-compilation. The build machine has a different architecture, or runs a different operating system, from the target machine.
- Test partitioning. Build once on the build machine, then partition test execution across multiple target machines.
- Saving execution time on more valuable machines. For example, build tests on a regular machine, then run them on a machine with a GPU attached to it.
Requirements
- The project source must be checked out to the same revision on the target machine. This might be needed for test fixtures and other assets, and nextest sets the right working directory relative to the workspace root when executing tests.
- It is your responsibility to transfer over the archive. Use the examples below as a template.
- Nextest must be installed on the target machine. For best results, use the same version of nextest on both machines.
Non-requirements
- Cargo does not need to be installed on the target machine. If `cargo` is unavailable, replace `cargo nextest` with `cargo-nextest nextest` in the following examples.
Creating archives
`cargo nextest archive --archive-file <name-of-archive.tar.zst>` creates an archive with the following contents:

- Cargo-related metadata, at the location `target/nextest/cargo-metadata.json`.
- Metadata about test binaries, at the location `target/nextest/binaries-metadata.json`.
- All test binaries.
- Other relevant files:
  - Dynamic libraries that test binaries might link to.
  - Non-test binaries used by integration tests.
Note that archives do not include the source code for your project. It is your responsibility to ensure that the source code for your workspace is transferred over to the target machine and has the same contents.
Currently, the only supported format is a Zstandard-compressed tarball (`.tar.zst`).
Running tests from archives
`cargo nextest list` and `run` support an `--archive-file` option, which accepts archives created by `cargo nextest archive` as described above.

By default, archives are extracted to a temporary directory, and nextest remaps paths to use the new target directory. To specify the directory archives should be extracted to, use the `--extract-to` option.
Specifying a new location for the source code
By default, nextest expects the workspace's source code to be in the same location on both the build and target machines. To specify a new location for the workspace, use the `--workspace-remap <path-to-workspace-root>` option with the `list` or `run` commands.
Example: Simple build/run split
1. Build and archive tests, specifying all the options you would normally use to build them:

   ```shell
   cargo nextest archive --workspace --all-features --archive-file my-archive.tar.zst
   ```

2. Run the tests. The archive retains the options used to build the tests, so you do not need to specify them again:

   ```shell
   cargo nextest run --archive-file my-archive.tar.zst
   ```
Example: Use in GitHub Actions
See this working example for how to reuse builds and partition test runs on GitHub Actions.
Example: Cross-compilation
While cross-compiling code, some tests may need to be run on the host platform. (See the note about Filtering by build platform for more.)
On the build machine
1. Build and run host-only tests:

   ```shell
   cargo nextest run --target <TARGET> -E 'platform(host)'
   ```

2. Archive tests:

   ```shell
   cargo nextest archive --target <TARGET> --archive-file my-archive.tar.zst
   ```

3. Copy `my-archive.tar.zst` to the target machine.
On the target machine
1. Check out the project repository to a path `<REPO-PATH>`, at the same revision as the build machine.

2. List target-only tests:

   ```shell
   cargo nextest list -E 'platform(target)' \
       --archive-file my-archive.tar.zst \
       --workspace-remap <REPO-PATH>
   ```

3. Run target-only tests:

   ```shell
   cargo nextest run -E 'platform(target)' \
       --archive-file my-archive.tar.zst \
       --workspace-remap <REPO-PATH>
   ```
Manually creating your own archives
You can also create and manage your own archives, with the following options to `cargo nextest list` and `run`:

- `--binaries-metadata`: the path to JSON metadata generated by `cargo nextest list --list-type binaries-only --message-format json`.
- `--target-dir-remap`: a possible new location for the target directory. Requires `--binaries-metadata`.
- `--cargo-metadata`: the path to JSON metadata generated by `cargo metadata --format-version 1`.
Making tests relocatable
Some tests may need to be modified to handle changes in the workspace and target directories. Some common situations:
- To obtain the path to the source directory, Cargo provides the `CARGO_MANIFEST_DIR` environment variable at both build time and runtime. For relocatable tests, use the value of `CARGO_MANIFEST_DIR` at runtime: that means `std::env::var("CARGO_MANIFEST_DIR")`, not `env!("CARGO_MANIFEST_DIR")`.

  If the workspace is remapped, nextest automatically sets `CARGO_MANIFEST_DIR` to the new location.

- To obtain the path to a crate's executables, Cargo provides the `CARGO_BIN_EXE_<name>` environment variable to integration tests at build time. To handle target directory remapping, use the value of `NEXTEST_BIN_EXE_<name>` at runtime.

  To retain compatibility with `cargo test`, you can fall back to the value of `CARGO_BIN_EXE_<name>` at build time.
Options and arguments for cargo nextest archive
Build and archive tests
```
Usage: cargo nextest archive [OPTIONS] --archive-file <PATH>

Options:
      --manifest-path <PATH>    Path to Cargo.toml
  -v, --verbose                 Verbose output [env: NEXTEST_VERBOSE=]
      --color <WHEN>            Produce color output: auto, always, never [env: CARGO_TERM_COLOR=] [default: auto]
  -h, --help                    Print help (see more with '--help')

Cargo options:
      --lib                     Test only this package's library unit tests
      --bin <BIN>               Test only the specified binary
      --bins                    Test all binaries
      --example <EXAMPLE>       Test only the specified example
      --examples                Test all examples
      --test <TEST>             Test only the specified test target
      --tests                   Test all targets
      --bench <BENCH>           Test only the specified bench target
      --benches                 Test all benches
      --all-targets             Test all targets
  -p, --package <PACKAGES>      Package to test
      --workspace               Build all packages in the workspace
      --exclude <EXCLUDE>       Exclude packages from the test
      --all                     Alias for workspace (deprecated)
  -r, --release                 Build artifacts in release mode, with optimizations
      --cargo-profile <NAME>    Build artifacts with the specified Cargo profile
      --build-jobs <JOBS>       Number of build jobs to run
  -F, --features <FEATURES>     Space or comma separated list of features to activate
      --all-features            Activate all available features
      --no-default-features     Do not activate the `default` feature
      --target <TRIPLE>         Build for the target triple
      --target-dir <DIR>        Directory for all generated artifacts
      --ignore-rust-version     Ignore `rust-version` specification in packages
      --unit-graph              Output build graph in JSON (unstable)
      --future-incompat-report  Outputs a future incompatibility report at the end of the build
      --frozen                  Require Cargo.lock and cache are up to date
      --locked                  Require Cargo.lock is up to date
      --offline                 Run without accessing the network
      --config <KEY=VALUE>      Override a configuration value
  -Z <FLAG>                     Unstable (nightly-only) flags to Cargo, see 'cargo -Z help' for details

Archive options:
      --archive-file <PATH>      File to write archive to
      --archive-format <FORMAT>  Archive format [default: auto] [possible values: auto, tar-zst]
      --zstd-level <LEVEL>       Zstandard compression level (-7 to 22, higher is more compressed + slower) [default: 0]

Config options:
      --config-file <PATH>                Config file [default: workspace-root/.config/nextest.toml]
      --tool-config-file <TOOL:ABS_PATH>  Tool-specific config files
```
Partitioning test runs in CI
For CI scenarios where test runs take too long on a single machine, nextest supports automatically partitioning or sharding tests into buckets, using the `--partition` option.
cargo-nextest supports two kinds of partitioning: counted and hashed.
Counted partitioning
Counted partitioning is specified with `--partition count:m/n`, where `m` and `n` are both integers and 1 ≤ `m` ≤ `n`. Specifying this option means "run tests in count-based bucket `m` of `n`".
For example, to run tests in bucket 1 of 2, pass `--partition count:1/2`. Tests not in the current bucket are marked as skipped.
Counted partitioning is done per test binary. This means that the tests in one binary do not influence counting for other binaries.
Counted partitioning also applies after all other test filters. For example, if you specify `cargo nextest run --partition count:1/3 test_parsing`, nextest first selects tests that match the substring `test_parsing`, then buckets this subset of tests into 3 partitions and runs the tests in partition 1.
Hashed sharding
Hashed sharding is specified with `--partition hash:m/n`, where `m` and `n` are both integers and 1 ≤ `m` ≤ `n`. Specifying this option means "run tests in hashed bucket `m` of `n`".
The main benefit of hashed sharding is that it is completely deterministic (the hash is based on a combination of the binary and test names). Unlike with counted partitioning, adding or removing tests, or changing test filters, will never cause a test to fall into a different bucket. The hash algorithm is guaranteed never to change within a nextest version series.
For sufficiently large numbers of tests, hashed sharding produces roughly the same number of tests per bucket. However, smaller test runs may result in an uneven distribution.
Reusing builds
By default, each job has to do its own build before starting a test run. To save on the extra work, nextest supports archiving builds in one job for later reuse in other jobs. See the example below for how to do this.
Example: Use in GitHub Actions
See this working example for how to reuse builds and partition test runs on GitHub Actions.
Example: Use in GitLab CI
GitLab can parallelize jobs across runners. This works neatly with `--partition`. For example:
```yaml
test:
  stage: test
  parallel: 3
  script:
    - echo "Node index - ${CI_NODE_INDEX}. Total amount - ${CI_NODE_TOTAL}"
    - time cargo nextest run --workspace --partition count:${CI_NODE_INDEX}/${CI_NODE_TOTAL}
```
This creates three jobs that run in parallel: `test 1/3`, `test 2/3`, and `test 3/3`.
Target runners
If you're cross-compiling Rust code, you may wish to run tests through a wrapper executable or script. For this purpose, nextest supports target runners, using the same configuration options used by Cargo:
- The environment variable `CARGO_TARGET_<triple>_RUNNER`, if it matches the target platform, takes highest precedence.
- Otherwise, nextest reads the `target.<triple>.runner` and `target.<cfg>.runner` settings from `.cargo/config.toml`.
Example
If you're on Linux cross-compiling to Windows, you can choose to run tests through Wine.
Add the following to `.cargo/config.toml`:

```toml
[target.x86_64-pc-windows-msvc]
runner = "wine"
```

Or, in your shell:

```shell
export CARGO_TARGET_X86_64_PC_WINDOWS_MSVC_RUNNER=wine
```

Then, running this command will cause your tests to be run as `wine <test-binary>`:

```shell
cargo nextest run --target x86_64-pc-windows-msvc
```
Note: If your target runner is a shell script, it might malfunction on macOS due to System Integrity Protection's environment sanitization. Nextest provides the `NEXTEST_LD_*` and `NEXTEST_DYLD_*` environment variables as workarounds: see Environment variables nextest sets for more.
Cross-compiling
While cross-compiling code, some tests may need to be run on the host platform. (See the note about Filtering by build platform for more.)
For tests that run on the host platform, nextest uses the target runner defined for the host. For example, if cross-compiling from `x86_64-unknown-linux-gnu` to `x86_64-pc-windows-msvc`, nextest will use `CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUNNER` for proc-macro and other host-only tests, and `CARGO_TARGET_X86_64_PC_WINDOWS_MSVC_RUNNER` for all other tests.
This behavior is similar to that of per-test overrides.
Debugging output
Nextest invokes target runners during both the list and run phases. During the list phase, nextest has stringent rules for the contents of standard output.
If a target runner produces debugging or any other kind of output, it MUST NOT go to standard output. You can produce output to standard error, to a file on disk, etc.
For example, this target runner will not work:

```shell
#!/bin/bash
echo "This is some debugging output"
"$@"
```
Instead, redirect debugging output to standard error:

```shell
#!/bin/bash
echo "This is some debugging output" >&2
"$@"
```
Other options
This page describes some other options accepted by `cargo nextest run`. Many of these options are also accepted as configuration settings and environment variables.
Runner options
- `--no-fail-fast`: do not exit the test run on the first failure. Most useful for CI scenarios.
- `-j, --test-threads`: number of tests to run simultaneously. Note that this is separate from the number of build jobs to run simultaneously, which is specified by `--build-jobs`.
- `--run-ignored ignored-only`: run only ignored tests; `--run-ignored all` runs both ignored and non-ignored tests.
Reporter options
`--success-output` and `--failure-output`

These options control when standard output and standard error are displayed for passing and failing tests, respectively. The possible values are:

- `immediate`: display output as soon as the test fails. Default for `--failure-output`.
- `final`: display output at the end of the test run.
- `immediate-final`: display output as soon as the test fails, and again at the end of the run. This is most useful for CI jobs.
- `never`: never display output. Default for `--success-output`.
These options can also be configured via global configuration and per-test overrides. Specifying these options over the command line will override configuration settings.
`--status-level` and `--final-status-level`

- `--status-level`: which test statuses (PASS, FAIL, etc.) to display. There are 7 status levels: `none`, `fail`, `retry`, `slow`, `pass`, `skip`, `all`. Each status level also displays all earlier status levels, similar to log levels; for example, setting `status-level` to `skip` shows failing, retried, slow, and passing tests along with skipped tests. The default is `pass`.
- `--final-status-level`: which test statuses to display at the end of a test run. For example, this can be set to `fail` to print a list of failing tests at the end of the run. The default is `none`.
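Both settings can also be configured per profile. For example, a CI profile might surface skipped tests during the run and repeat failures at the end:

```toml
[profile.ci]
# Show skipped tests too during the run (skip implies fail, retry,
# slow, and pass).
status-level = "skip"
# Repeat the list of failing tests at the end of the run.
final-status-level = "fail"
```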
For a full list of options, see Options and arguments.
Machine-readable output
cargo-nextest can be configured to produce machine-readable JSON output, readable by other programs. The nextest-metadata crate provides a Rust interface for deserializing the output. (The same crate is used by nextest to generate the output.)
Listing tests
To produce a list of tests in the JSON output format, run `cargo nextest list --message-format json` (or `json-pretty` for nicely formatted output). Here's some example output for the tokio repository:
```
% cargo nextest list -p tokio-util --features full --lib --message-format json-pretty
{
  "rust-build-meta": {
    "target-directory": "/home/me/dev/tokio/target",
    "base-output-directories": [
      "debug"
    ],
    "non-test-binaries": {},
    "linked-paths": []
  },
  "test-count": 4,
  "rust-suites": {
    "tokio-util": {
      "package-name": "tokio-util",
      "binary-id": "tokio-util",
      "binary-name": "tokio-util",
      "package-id": "tokio-util 0.7.3 (path+file:///home/me/dev/tokio/tokio-util)",
      "kind": "lib",
      "binary-path": "/home/me/dev/tokio/target/debug/deps/tokio_util-9dd5cbf268a3ffb4",
      "build-platform": "target",
      "cwd": "/home/me/dev/tokio/tokio-util",
      "status": "listed",
      "testcases": {
        "either::tests::either_is_async_read": {
          "ignored": false,
          "filter-match": {
            "status": "matches"
          }
        },
        "either::tests::either_is_stream": {
          "ignored": false,
          "filter-match": {
            "status": "matches"
          }
        },
        "time::wheel::level::test::test_slot_for": {
          "ignored": false,
          "filter-match": {
            "status": "matches"
          }
        },
        "time::wheel::test::test_level_for": {
          "ignored": false,
          "filter-match": {
            "status": "matches"
          }
        }
      }
    }
  }
}
```
The value of `"package-id"` can be matched up with the package IDs produced by running `cargo metadata`.
Running tests
This is currently not implemented. Help wanted: please post in the issue if you'd like to work on this!
Configuration
cargo-nextest supports repository-specific configuration at `.config/nextest.toml`, relative to the Cargo workspace root. The location of the configuration file can be overridden with the `--config-file` option.
The default configuration shipped with cargo-nextest is:
```toml
# This is the default config used by nextest. It is embedded in the binary at
# build time. It may be used as a template for .config/nextest.toml.

[store]
# The directory under the workspace root at which nextest-related files are
# written. Profile-specific storage is currently written to dir/<profile-name>.
dir = "target/nextest"

# This section defines the default nextest profile. Custom profiles are layered
# on top of the default profile.
[profile.default]
# "retries" defines the number of times a test should be retried. If set to a
# non-zero value, tests that succeed on a subsequent attempt will be marked as
# flaky. Can be overridden through the `--retries` option.
# Examples
# * retries = 3
# * retries = { backoff = "fixed", count = 2, delay = "1s" }
# * retries = { backoff = "exponential", count = 10, delay = "1s", jitter = true, max-delay = "10s" }
retries = 0

# The number of threads to run tests with. Supported values are either an integer or
# the string "num-cpus". Can be overridden through the `--test-threads` option.
test-threads = "num-cpus"

# The number of threads required for each test. This is generally used in overrides to
# mark certain tests as heavier than others. However, it can also be set as a global parameter.
threads-required = 1

# Show these test statuses in the output.
#
# The possible values this can take are:
# * none: no output
# * fail: show failed (including exec-failed) tests
# * retry: show flaky and retried tests
# * slow: show slow tests
# * pass: show passed tests
# * skip: show skipped tests (most useful for CI)
# * all: all of the above
#
# Each value includes all the values above it; for example, "slow" includes
# failed and retried tests.
#
# Can be overridden through the `--status-level` flag.
status-level = "pass"

# Similar to status-level, show these test statuses at the end of the run.
final-status-level = "flaky"

# "failure-output" defines when standard output and standard error for failing tests are produced.
# Accepted values are
# * "immediate": output failures as soon as they happen
# * "final": output failures at the end of the test run
# * "immediate-final": output failures as soon as they happen and at the end of
#   the test run; combination of "immediate" and "final"
# * "never": don't output failures at all
#
# For large test suites and CI it is generally useful to use "immediate-final".
#
# Can be overridden through the `--failure-output` option.
failure-output = "immediate"

# "success-output" controls production of standard output and standard error on success. This should
# generally be set to "never".
success-output = "never"

# Cancel the test run on the first failure. For CI runs, consider setting this
# to false.
fail-fast = true

# Treat a test that takes longer than the configured 'period' as slow, and print a message.
# See <https://nexte.st/book/slow-tests> for more information.
#
# Optional: specify the parameter 'terminate-after' with a non-zero integer,
# which will cause slow tests to be terminated after the specified number of
# periods have passed.
# Example: slow-timeout = { period = "60s", terminate-after = 2 }
slow-timeout = { period = "60s" }

# Treat a test as leaky if after the process is shut down, standard output and standard error
# aren't closed within this duration.
#
# This usually happens in case of a test that creates a child process and lets it inherit those
# handles, but doesn't clean the child process up (especially when it fails).
#
# See <https://nexte.st/book/leaky-tests> for more information.
leak-timeout = "100ms"

[profile.default.junit]
# Output a JUnit report into the given file inside 'store.dir/<profile-name>'.
# If unspecified, JUnit is not written out.
# path = "junit.xml"

# The name of the top-level "report" element in JUnit report. If aggregating
# reports across different test runs, it may be useful to provide separate names
# for each report.
report-name = "nextest-run"

# Whether standard output and standard error for passing tests should be stored in the JUnit report.
# Output is stored in the <system-out> and <system-err> elements of the <testcase> element.
store-success-output = false

# Whether standard output and standard error for failing tests should be stored in the JUnit report.
# Output is stored in the <system-out> and <system-err> elements of the <testcase> element.
#
# Note that if a description can be extracted from the output, it is always stored in the
# <description> element.
store-failure-output = true

# This profile is activated if MIRI_SYSROOT is set.
[profile.default-miri]
# Miri tests take up a lot of memory, so only run 1 test at a time by default.
test-threads = 1
```
Profiles
With cargo-nextest, local and CI runs often need to use different settings. For example, CI test runs should not be cancelled as soon as the first test failure is seen.
cargo-nextest supports multiple profiles, where each profile is a set of options for cargo-nextest. Profiles are selected on the command line with the `-P` or `--profile` option. Most individual configuration settings can also be overridden at the command line.
Here is a recommended profile for CI runs:
```toml
[profile.ci]
# Print out output for failing tests as soon as they fail, and also at the end
# of the run (for easy scrollability).
failure-output = "immediate-final"
# Do not cancel the test run on the first failure.
fail-fast = false
```
After checking the profile into `.config/nextest.toml`, use `cargo nextest run --profile ci` in your CI runs.
Note: Nextest's embedded configuration may define new profiles whose names start with `default-` in the future. To avoid backwards compatibility issues, do not name custom profiles starting with `default-`.
Tool-specific configuration
Some tools that integrate with nextest may wish to customize nextest's defaults. However, in most cases, command-line arguments and repository-specific configuration should still override those defaults.
To support these tools, nextest accepts the `--tool-config-file` argument. Values to this argument are specified in the form `tool:/path/to/config.toml`. For example, if your tool `my-tool` needs to call nextest with customized defaults, it should run:

cargo nextest run --tool-config-file my-tool:/path/to/my/config.toml

The `--tool-config-file` argument may be specified multiple times. Config files specified earlier are higher priority than those that come later.
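A tool config file uses the same format as repository configuration. Here is a minimal sketch; the `my-tool` name and the specific settings are illustrative, not nextest defaults:

```toml
# Hypothetical defaults shipped by `my-tool` in /path/to/my/config.toml.
# Repository configuration and command-line arguments still take priority
# over these values.
[profile.default]
retries = 1
slow-timeout = "60s"
```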
Hierarchical configuration
Configuration is resolved in the following order:

1. Command-line arguments. For example, if `--retries=3` is specified on the command line, failing tests are retried up to 3 times.
2. Environment variables. For example, if `NEXTEST_RETRIES=4` is set in the environment, failing tests are retried up to 4 times.
3. Per-test overrides, if they're supported for this configuration variable.
4. If a profile is specified, profile-specific configuration in `.config/nextest.toml`. For example, if the repository-specific configuration looks like:

   ```toml
   [profile.ci]
   retries = 2
   ```

   then, if `--profile ci` is selected, failing tests are retried up to 2 times.
5. If a profile is specified, tool-specific configuration for the given profile.
6. Repository-specific configuration for the `default` profile. For example, if the repository-specific configuration looks like:

   ```toml
   [profile.default]
   retries = 5
   ```

   then failing tests are retried up to 5 times.
7. Tool-specific configuration for the `default` profile.
8. The default configuration listed above, which is that tests are never retried.
Environment variables
This section contains information about the environment variables nextest reads and sets.
Environment variables nextest reads
Nextest reads some of its command-line options as environment variables. In all cases, passing in a command-line option overrides the respective environment variable.
- `NEXTEST_PROFILE` — Nextest profile to use while running tests.
- `NEXTEST_TEST_THREADS` — Number of tests to run simultaneously.
- `NEXTEST_RETRIES` — Number of times to retry running tests.
- `NEXTEST_HIDE_PROGRESS_BAR` — If set to "1", always hide the progress bar.
- `NEXTEST_FAILURE_OUTPUT` and `NEXTEST_SUCCESS_OUTPUT` — When standard output and standard error are displayed for failing and passing tests, respectively. See Reporter options for possible values.
- `NEXTEST_STATUS_LEVEL` — Which test statuses (PASS, FAIL etc.) to display. See Reporter options for possible values.
- `NEXTEST_FINAL_STATUS_LEVEL` — Which test statuses (PASS, FAIL etc.) to display at the end of a test run. See Reporter options for possible values.
- `NEXTEST_VERBOSE` — Verbose output.
Nextest also reads the following environment variables to emulate Cargo's behavior.
- `CARGO` — Path to the `cargo` binary to use for builds.
- `CARGO_TARGET_DIR` — Location of where to place all generated artifacts, relative to the current working directory.
- `CARGO_TARGET_<triple>_RUNNER` — Support for target runners.
- `CARGO_TERM_COLOR` — The default color mode: `always`, `auto` or `never`.
Cargo-related environment variables nextest reads
Nextest delegates to Cargo for the build, which recognizes a number of environment variables. See Environment variables Cargo reads for a full list.
Environment variables nextest sets
Nextest exposes these environment variables to your tests at runtime only. They are not set at build time because cargo-nextest may reuse builds done outside of the nextest environment.
- `NEXTEST` — Always set to `"1"`.
- `NEXTEST_RUN_ID` — A UUID corresponding to a particular nextest run. All tests run via a particular invocation of `cargo nextest run` will have the same UUID.
- `NEXTEST_EXECUTION_MODE` — Currently, always set to `process-per-test`. More options may be added in the future if nextest gains the ability to run all tests within the same process (#27).
- `NEXTEST_BIN_EXE_<name>` — The absolute path to a binary target's executable. This is only set when running an integration test or benchmark. The `<name>` is the name of the binary target, exactly as-is. For example, `NEXTEST_BIN_EXE_my-program` for a binary named `my-program`.
  - Binaries are automatically built when the test is built, unless the binary has required features that are not enabled.
  - When reusing builds from an archive, this is set to the remapped path within the target directory.
- `NEXTEST_LD_*` and `NEXTEST_DYLD_*` — These replicate the values of any environment variables that start with the prefixes `LD_` or `DYLD_`, such as `LD_PRELOAD` or `DYLD_FALLBACK_LIBRARY_PATH`. This is a workaround for macOS's System Integrity Protection sanitizing dynamic linker environment variables for processes like the system `bash`, and is particularly relevant for target runners. See this blog post for more about how sanitization works. Note: The `NEXTEST_LD_*` and `NEXTEST_DYLD_*` variables are set on all platforms, not just macOS.
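Because these variables are set at runtime only, tests should read them with `std::env::var` rather than the compile-time `env!` macro. A minimal sketch (the helper name is our own):

```rust
// Sketch: detect whether the current process was launched by nextest.
// NEXTEST is only set at runtime, so a runtime lookup is required; a
// compile-time env! lookup would not see it.
fn is_nextest(value: Option<&str>) -> bool {
    value == Some("1")
}

fn main() {
    let under_nextest = is_nextest(std::env::var("NEXTEST").ok().as_deref());
    if under_nextest {
        // e.g. correlate logs across a run via the run's UUID
        let run_id = std::env::var("NEXTEST_RUN_ID").unwrap_or_default();
        println!("nextest run id: {run_id}");
    } else {
        println!("not running under nextest");
    }
}
```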
Cargo-related environment variables nextest sets
Nextest delegates to Cargo for the build, which controls the environment variables that are set. See Environment variables Cargo sets for crates for a full list.
Nextest also sets these environment variables at runtime, matching the behavior of cargo test:
- `CARGO` — Path to the `cargo` binary performing the build.
- `CARGO_MANIFEST_DIR` — The directory containing the manifest of your package. If `--workspace-remap` is passed in, this is set to the remapped manifest directory. You can obtain the non-remapped directory using the value of this variable at compile-time, e.g. `env!("CARGO_MANIFEST_DIR")`.
- `CARGO_PKG_VERSION` — The full version of your package.
- `CARGO_PKG_VERSION_MAJOR` — The major version of your package.
- `CARGO_PKG_VERSION_MINOR` — The minor version of your package.
- `CARGO_PKG_VERSION_PATCH` — The patch version of your package.
- `CARGO_PKG_VERSION_PRE` — The pre-release version of your package.
- `CARGO_PKG_AUTHORS` — Colon-separated list of authors from the manifest of your package.
- `CARGO_PKG_NAME` — The name of your package.
- `CARGO_PKG_DESCRIPTION` — The description from the manifest of your package.
- `CARGO_PKG_HOMEPAGE` — The home page from the manifest of your package.
- `CARGO_PKG_REPOSITORY` — The repository from the manifest of your package.
- `CARGO_PKG_LICENSE` — The license from the manifest of your package.
- `CARGO_PKG_LICENSE_FILE` — The license file from the manifest of your package.
- Environment variables specified in the `[env]` section of `.cargo/config.toml`.
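The remapping note for `CARGO_MANIFEST_DIR` above can matter in practice: the compile-time value is baked into the binary, while the runtime value reflects any `--workspace-remap`. A sketch of handling both (the helper and the fallback policy are our own; `option_env!` is used instead of `env!` so the code also compiles when Cargo didn't set the variable):

```rust
// Sketch: prefer the runtime CARGO_MANIFEST_DIR (which reflects any
// --workspace-remap) and fall back to the compile-time value.
fn effective_manifest_dir(
    compile_time: Option<&str>,
    run_time: Option<String>,
) -> Option<String> {
    run_time.or_else(|| compile_time.map(str::to_owned))
}

fn main() {
    let dir = effective_manifest_dir(
        option_env!("CARGO_MANIFEST_DIR"),
        std::env::var("CARGO_MANIFEST_DIR").ok(),
    );
    println!("manifest dir: {dir:?}");
}
```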
Dynamic library paths
Nextest sets the dynamic library path at runtime, similar to what Cargo does. This helps with locating shared libraries that are part of the build process. The variable name depends on the platform:
- Windows: `PATH`
- macOS: `DYLD_FALLBACK_LIBRARY_PATH`
- Unix: `LD_LIBRARY_PATH`
Nextest includes the following paths:
- Search paths included from any build script with the `rustc-link-search` instruction. Paths outside of the target directory are removed. It is the responsibility of the user running nextest to properly set the environment if additional libraries on the system are needed in the search path.
- The base output directory, such as `target/debug`, and the "deps" directory. This enables support for `dylib` dependencies and rustc compiler plugins.
Nextest currently relies on being invoked as a Cargo subcommand to set the rustc sysroot library path.
Minimum nextest versions
Starting with version 0.9.55, nextest lets you set minimum required and recommended versions per-repository. This is similar to the `rust-version` field in `Cargo.toml`.

- If the current version of nextest is lower than the required version, nextest will produce an error and exit with code 92 (`REQUIRED_VERSION_NOT_MET`).
- If the current version of nextest is lower than the recommended version, nextest will produce a warning, but will run as normal.
Setting minimum versions
To set a minimum required version, add to `.config/nextest.toml`, at the top of the file:
nextest-version = "0.9.55"
# or
nextest-version = { required = "0.9.55" }
To set a minimum recommended version, add to `.config/nextest.toml`:
nextest-version = { recommended = "0.9.55" }
Both required and recommended versions can be set simultaneously:
nextest-version = { required = "0.9.53", recommended = "0.9.55" }
NOTE: Versions of nextest prior to 0.9.55 do not support the `nextest-version` configuration. Depending on how old the version is, nextest may print an "unknown configuration" warning or ignore `nextest-version` entirely.
Bypassing the version check
Nextest accepts an `--override-version-check` CLI option that bypasses the version check. If the override is activated, nextest will print a message informing you of that.

```console
% cargo nextest run --override-version-check
info: overriding version check (required: 0.9.55, current: 0.9.54)
    Finished test [unoptimized + debuginfo] target(s) in 0.22s
    Starting 191 tests across 13 binaries
...
```
Showing required and recommended versions
To show and verify the version status, run `cargo nextest show-config version`. This will produce output similar to:

```console
% cargo nextest show-config version
current nextest version: 0.9.54
version requirements:
 - required: 0.9.55
evaluation result: does not meet required version
error: update nextest with cargo nextest self update, or bypass check with --override-version-check
```
This command exits with:
- Exit code 92 (`REQUIRED_VERSION_NOT_MET`) if the current version of nextest is lower than the required version.
- Exit code 10 (`RECOMMENDED_VERSION_NOT_MET`) if the current version of nextest is lower than the recommended version. This is an advisory exit code that does not necessarily indicate failure.
- Exit code 0 if the version check was satisfied, or if the check was overridden.
Note for tool developers
If you're building a tool on top of nextest, you can use tool-specific configuration to define minimum required and recommended nextest versions.
As an exception to the general priority rules with tool-specific configuration, required and recommended versions across all config files (both repository and tool-specific configurations) are taken into account.
For example, if:

- The repository requires nextest 0.9.54.
- There are two tool config files, and the first one requires nextest 0.9.57.
- The second one requires nextest 0.9.60.

Then, nextest will produce an error unless it is at least version 0.9.60.
Per-test overrides
Nextest supports overriding some settings for subsets of tests, using the filter expression and Rust conditional compilation syntaxes.
Overrides are set via the `[[profile.<name>.overrides]]` list. Each override consists of the following:

- `filter` — The filter expression to match.
- `platform` — The platforms to match.
- Supported overrides, which are optional. Currently supported are:
  - `threads-required` — Number of threads required for this test.
  - `test-group` — An optional test group for this test.
  - `slow-timeout` — Amount of time after which tests are marked slow.
  - `leak-timeout` — How long to wait after the test completes for any subprocesses to exit.
  - `success-output` and `failure-output` — Control when standard output and standard error are displayed for passing and failing tests, respectively. Supported values are:
    - `immediate`: display output as soon as the test fails. Default for `failure-output`.
    - `final`: display output at the end of the test run.
    - `immediate-final`: display output as soon as the test fails, and again at the end of the run.
    - `never`: never display output. Default for `success-output`.
  - `junit.store-success-output` and `junit.store-failure-output` — Whether to store output for passing and failing tests, respectively, in JUnit reports.
Example
[profile.ci]
retries = 1
[[profile.ci.overrides]]
filter = 'test(/\btest_network_/)'
retries = 4
[[profile.ci.overrides]]
platform = 'x86_64-unknown-linux-gnu'
slow-timeout = "5m"
[[profile.ci.overrides]]
filter = 'test(/\btest_filesystem_/)'
platform = { host = 'cfg(target_os = "macos")' }
leak-timeout = "500ms"
success-output = "immediate"
When `--profile ci` is specified:

- for test names that start with `test_network_` (including test names like `my_module::test_network_`), retry tests up to 4 times
- on `x86_64-unknown-linux-gnu`, set a slow timeout of 5 minutes
- on macOS hosts, for test names that start with `test_filesystem_` (including test names like `my_module::test_filesystem_`), set a leak timeout of 500 milliseconds, and show success output immediately.
Override precedence
Overrides are configured as an ordered list. They're applied in the following order, for a given test T and a given setting S:
1. If nextest is run with `--profile my-profile`, the first override within `profile.my-profile.overrides` that matches T and configures S.
2. The first override within `profile.default.overrides` that matches T and configures S.
3. If nextest is run with `--profile my-profile`, the global configuration for that profile, if it configures S.
4. The global configuration specified by `profile.default`.
Precedence is evaluated separately for each override. If a particular override does not configure a setting, it is ignored for that setting.
Example
[profile.default]
retries = 0 # this is the default, so it doesn't need to be specified
slow-timeout = "30s"
[[profile.default.overrides]]
filter = 'package(my-package)'
retries = 2
slow-timeout = "45s"
[profile.ci]
retries = 1
slow-timeout = { period = "15s", terminate-after = 2 }
[[profile.ci.overrides]]
filter = 'package(my-package) and test(/^flaky::/)'
retries = 3
If nextest is run with `--profile ci`:

- Tests in `my-package` that begin with `flaky::` are retried 3 times, and are run with a slow timeout of 45 seconds.
- Other tests in `my-package` are retried 2 times and are run with a slow timeout of 45 seconds.
- All other tests are retried up to one time and are run with a slow timeout of 15 seconds. Tests that take longer than 30 seconds are terminated.
If nextest is run without `--profile`:

- Tests in `my-package` are retried 2 times and run with a slow timeout of 45 seconds.
- Other tests are retried 0 times with a slow timeout of 30 seconds.
Specifying platforms for per-test overrides
Per-test overrides support filtering by platform. Either a Rust target triple or a `cfg()` expression may be specified.
For example, with the following configuration:
[[profile.default.overrides]]
platform = 'cfg(target_os = "linux")'
retries = 3
Test runs on Linux will have 3 retries.
Cross-compiling
While cross-compiling code, nextest's per-test overrides support filtering by either host or target platforms.
If `platform` is set to a string, then nextest will consider it to be the target filter. For example, if the following is specified:
[[profile.default.overrides]]
platform = 'aarch64-apple-darwin'
slow-timeout = "120s"
Then test runs performed either natively on `aarch64-apple-darwin`, or while cross-compiling from some other operating system to `aarch64-apple-darwin`, will be marked slow after 120 seconds.
Starting with nextest version 0.9.58, `platform` can also be set to a map with `host` and `target` keys. While determining whether a particular override applies, nextest will apply both host and target filters (AND operation).
For example:
[[profile.default.overrides]]
platform = { host = 'cfg(target_os = "macos")' }
retries = 1
[[profile.default.overrides]]
platform = { host = 'x86_64-unknown-linux-gnu', target = 'cfg(windows)' }
threads-required = 2
With this configuration:
- On macOS hosts (regardless of the target platform), tests will be retried once.
- On x86_64 Linux hosts, while cross-compiling to Windows, tests will be marked as requiring two threads each.
Note: Specifying `platform` as a string is equivalent to specifying it as a map with the `target` key.
Host tests
While cross-compiling code, some tests may need to be run on the host platform. (See the note about Filtering by build platform for more.)
For tests that run on the host platform, to figure out if an override applies, nextest will compute the result of the target filter against the host platform. (If the `host` key is specified, it will be considered as well, based on the AND semantics listed above.)
This behavior is similar to that of target runners.
Heavy tests and threads-required
Nextest achieves its performance through running many tests in parallel. However, some projects have tests that consume a disproportionate amount of resources like CPU or memory. If too many of these heavy tests are run concurrently, your machine's CPU might be overwhelmed, or it might run out of memory.
With nextest, you can mark heavy tests as taking up multiple threads or "slots" out of the total amount of available parallelism. In other words, you can assign those tests a higher "weight". This is done by using the `threads-required` per-test override.
For example, on a machine with 16 logical CPUs, nextest will run 16 tests concurrently by default. However, if you mark tests that begin with `tests::heavy::` as requiring 2 threads each:
[[profile.default.overrides]]
filter = 'test(/^tests::heavy::/)'
threads-required = 2
Then each test in the `tests::heavy` module will take up 2 of those 16 threads.

The `threads-required` configuration can also be set to one of two special values:

- `"num-cpus"` — The number of logical CPUs on the system.
- `"num-test-threads"` — The number of test threads nextest is currently running with.

NOTE: `threads-required` is not meant to ensure mutual exclusion across subsets of tests. See Test groups and mutual exclusion.
Use cases
Some use cases that may benefit from limiting concurrency:
- Integration tests that spin up a network of services to run against.
- Tests that are multithreaded internally, possibly using a custom test harness where a single test is presented to nextest.
- Tests that consume large amounts of memory.
- Tests that must be mutually exclusive with all other tests globally (set `threads-required` to `num-test-threads`).
Tip: Be sure to benchmark your test runs! `threads-required` will often cause test runs to become slower overall. However, setting it might still be desirable if it makes test runs more reliable.
Test groups and mutual exclusion
Starting version 0.9.48, nextest allows users to specify test groups for sets of tests. This lets you configure groups of tests to run serially or with a limited amount of concurrency.
In other words, nextest lets you define logical semaphores and mutexes that apply to certain subsets of tests.
Tests that aren't part of a test group are not affected by these concurrency limits.
If the limit is set to 1, this is similar to `cargo test` with the `serial_test` crate, or a global mutex.
NOTE: Nextest does not directly support in-process mutexes or semaphores. Instead, you can emulate these features by using test groups.
Use cases
- Your tests access a network service (perhaps running on the same system) that can only handle one, or a limited number of, tests being run at a time.
- Your tests run against a global system resource that may fail, or encounter race conditions, if accessed by more than one process at a time.
- Your tests start up a network service that listens on a fixed TCP or UDP port on localhost, and if several tests try to open up the same port concurrently, they'll collide with each other.
While you can use test groups to make your existing network service tests work with nextest, this is not the "correct" way to write such tests. For example, your tests might collide with a network service already running on the system. The logical mutex will also make your test runs slower.
Consider these two recommended approaches instead:
- Use a randomly assigned port. On all platforms you can do this by binding to port 0. Once your test creates the service, you'll need a way to communicate the actual port assigned back to your test.
- If your service is in the same process as your test, you can expose an API to retrieve the actual port assigned.
- If your service is in another process, you'll need a way to communicate the port assigned back to the test. One approach is to pass in a temporary directory as an environment variable, then arrange for the service to write the port number in a file within the temporary directory.
- Rather than using TCP/IP, bind to a Unix domain socket in a temporary directory. This approach also works on Windows.
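The first recommendation can be sketched in a few lines: bind to port 0, then ask the listener which port the OS actually assigned (the helper name is our own).

```rust
use std::net::TcpListener;

// Sketch: let the OS pick a free port so concurrent tests never collide
// on a fixed port number.
fn bind_ephemeral() -> std::io::Result<(TcpListener, u16)> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let port = listener.local_addr()?.port();
    Ok((listener, port))
}

fn main() -> std::io::Result<()> {
    let (_listener, port) = bind_ephemeral()?;
    // Hand `port` to the client side of the test instead of hard-coding it.
    println!("listening on 127.0.0.1:{port}");
    Ok(())
}
```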
Configuring test groups
Test groups are specified in nextest's configuration by:
- Declaring test group names along with concurrency limits, using the `max-threads` parameter.
- Using the `test-group` per-test override.
For example:
[test-groups]
resource-limited = { max-threads = 4 }
serial-integration = { max-threads = 1 }
[[profile.default.overrides]]
filter = 'test(resource_limited::)'
test-group = 'resource-limited'
[[profile.default.overrides]]
filter = 'package(integration-tests)'
platform = 'cfg(unix)'
test-group = 'serial-integration'
This configuration defines two test groups:

- `resource-limited`, which is limited to 4 threads.
- `serial-integration`, which is limited to 1 thread.

These test groups impact execution in the following ways:

- Any tests whose name contains `resource_limited::` will be limited to running four at a time. In other words, there is a logical semaphore around all tests that contain `resource_limited::`, with four available permits.
- On Unix platforms, tests in the `integration-tests` package will be limited to running one at a time, i.e. serially. In other words, on Unix platforms, there is a logical mutex around all tests in the `integration-tests` package.
- Tests that are not in either of these groups will run with global concurrency limits.
Nextest will continue to schedule as many tests as possible, accounting for global and group concurrency limits.
Showing test groups
You can show the test groups currently in effect with `cargo nextest show-config test-groups`. With the above example, you might see the following output:

```console
    Finished test [unoptimized + debuginfo] target(s) in 0.09s
group: resource-limited (max threads = 4)
  * override for default profile with filter 'test(resource_limited::)':
      resource-bindings:
          access::resource_limited::test_resource_access
          edit::resource_limited::test_resource_edit
group: serial-integration (max threads = 1)
  * override for default profile with filter 'package(integration-tests)':
      integration-tests::integration:
          test_service_read
          test_service_write
```

This command accepts all the same options that `cargo nextest list` does.
Comparison with threads-required
Test groups are similar to heavy tests and `threads-required`. The key difference is that test groups are meant to limit concurrency for subsets of tests, while `threads-required` sets global limits across the entire test run.
Both of these options can be combined. For example:
[test-groups]
my-group = { max-threads = 8 }
[[profile.default.overrides]]
filter = 'test(/^group::heavy::/)'
test-group = 'my-group'
threads-required = 2
[[profile.default.overrides]]
filter = 'test(/^group::light::/)'
test-group = 'my-group'
threads-required = 1 # this is the default, shown for clarity
With this configuration:
- Tests whose names start with `group::heavy::`, and tests that start with `group::light::`, are both part of `my-group`.
- The `group::heavy::` tests will take up two slots within both global and group concurrency limits.
- The `group::light::` tests will take up one slot within both limits.

NOTE: Setting `threads-required` to be greater than a test group's `max-threads` will not cause issues; a test that does so will take up all slots available.
JUnit support
Nextest can produce output in the JUnit/XUnit XML format. This format is widely understood by test analysis tools and libraries.
To enable JUnit support, add this to your nextest configuration:
[profile.ci.junit] # this can be some other profile, too
path = "junit.xml"
If `--profile ci` is selected on the command line, a JUnit report will be written out to `target/nextest/ci/junit.xml` within the workspace root.
Some notes about the JUnit support:
- There are several slightly different formats all called "JUnit" or "XUnit". Nextest adheres to the Jenkins XML format.
- Every test binary forms a single `<testsuite>`. Every test forms a single `<testcase>`.
- Standard output and standard error are included for failed and retried tests. (However, invalid XML characters are stripped out.)
Configuration
Configuration options supported for JUnit reports, within the `junit` section:

- `report-name` — The name of the report. Defaults to `"nextest-run"`.
- `store-success-output` — Whether to store output for successful tests in the `<system-out>` and `<system-err>` elements. Defaults to false.
- `store-failure-output` — Whether to store output for failing tests in the `<system-out>` and `<system-err>` elements. Defaults to true.

`store-success-output` and `store-failure-output` can also be configured as per-test overrides.
Example configuration
[profile.default.junit]
path = "junit.xml"
# These are the default values, specified for clarity.
store-success-output = false
store-failure-output = true
[[profile.default.overrides]]
filter = 'test(important-test)'
junit.store-success-output = true
In this example, the JUnit report will contain the output for all failing tests, and for successful tests that contain "important-test" in the name.
Post-processing
Some tools that read JUnit files don't follow the Jenkins standard. You can post-process the JUnit file in such cases. Here are some recommendations for post-processing tools written by community members:

- CircleCI: circleci-junit-fix
Example
Here's an example JUnit file generated by `cargo-nextest`.
<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="nextest-run" tests="3" failures="1" errors="0" uuid="9ea8d2ee-e485-4e5b-8dba-4985c12081f7" timestamp="2022-07-28T01:09:27.584+00:00" time="0.007">
<testsuite name="nextest-tests::basic" tests="3" disabled="0" errors="0" failures="1">
<testcase name="test_cwd" classname="nextest-tests::basic" timestamp="2022-07-28T01:09:27.584+00:00" time="0.002">
</testcase>
<testcase name="test_failure_assert" classname="nextest-tests::basic" timestamp="2022-07-28T01:09:27.585+00:00" time="0.002">
<failure type="test failure">thread 'test_failure_assert' panicked at 'assertion failed: `(left == right)`
left: `4`,
right: `5`: this is an assertion', tests/basic.rs:9:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace</failure>
<rerunFailure timestamp="2022-07-28T01:09:27.586+00:00" time="0.002" type="test failure">thread 'test_failure_assert' panicked at 'assertion failed: `(left == right)`
left: `4`,
right: `5`: this is an assertion', tests/basic.rs:9:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
<system-out>
running 1 test
test test_failure_assert ... FAILED
failures:
failures:
test_failure_assert
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 17 filtered out; finished in 0.00s
</system-out>
<system-err>thread 'test_failure_assert' panicked at 'assertion failed: `(left == right)`
left: `4`,
right: `5`: this is an assertion', tests/basic.rs:9:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
</system-err>
</rerunFailure>
<rerunFailure timestamp="2022-07-28T01:09:27.588+00:00" time="0.002" type="test failure">thread 'test_failure_assert' panicked at 'assertion failed: `(left == right)`
left: `4`,
right: `5`: this is an assertion', tests/basic.rs:9:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
<system-out>
running 1 test
test test_failure_assert ... FAILED
failures:
failures:
test_failure_assert
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 17 filtered out; finished in 0.00s
</system-out>
<system-err>thread 'test_failure_assert' panicked at 'assertion failed: `(left == right)`
left: `4`,
right: `5`: this is an assertion', tests/basic.rs:9:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
</system-err>
</rerunFailure>
<system-out>
running 1 test
test test_failure_assert ... FAILED
failures:
failures:
test_failure_assert
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 17 filtered out; finished in 0.00s
</system-out>
<system-err>thread 'test_failure_assert' panicked at 'assertion failed: `(left == right)`
left: `4`,
right: `5`: this is an assertion', tests/basic.rs:9:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
</system-err>
</testcase>
<testcase name="test_flaky_mod_4" classname="nextest-tests::basic" timestamp="2022-07-28T01:09:27.589+00:00" time="0.001">
<flakyFailure timestamp="2022-07-28T01:09:27.586+00:00" time="0.002" type="test failure">thread 'test_flaky_mod_4' panicked at 'Failed because attempt 1 % 4 != 0', tests/basic.rs:33:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
<system-out>
running 1 test
test test_flaky_mod_4 ... FAILED
failures:
failures:
test_flaky_mod_4
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 17 filtered out; finished in 0.00s
</system-out>
<system-err>thread 'test_flaky_mod_4' panicked at 'Failed because attempt 1 % 4 != 0', tests/basic.rs:33:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
</system-err>
</flakyFailure>
<flakyFailure timestamp="2022-07-28T01:09:27.587+00:00" time="0.002" type="test failure">thread 'test_flaky_mod_4' panicked at 'Failed because attempt 2 % 4 != 0', tests/basic.rs:33:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
<system-out>
running 1 test
test test_flaky_mod_4 ... FAILED
failures:
failures:
test_flaky_mod_4
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 17 filtered out; finished in 0.00s
</system-out>
<system-err>thread 'test_flaky_mod_4' panicked at 'Failed because attempt 2 % 4 != 0', tests/basic.rs:33:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
</system-err>
</flakyFailure>
</testcase>
</testsuite>
</testsuites>
Integrations with other tools
This section covers other tools that integrate with nextest.
If your tool integrates with nextest, please feel free to open an issue to discuss including it in this section!
Test coverage
Test coverage support is provided by third-party tools that wrap around nextest.
llvm-cov
cargo-llvm-cov supports nextest. To generate llvm-cov data with nextest, run:
cargo install cargo-llvm-cov
cargo llvm-cov nextest
Using llvm-cov in GitHub Actions
Install Rust with the `llvm-tools-preview` component, nextest, and llvm-cov in GitHub Actions. Then, run `cargo llvm-cov nextest`.
- uses: dtolnay/rust-toolchain@stable
with:
components: llvm-tools-preview
- uses: taiki-e/install-action@cargo-llvm-cov
- uses: taiki-e/install-action@nextest
- name: Collect coverage data
run: cargo llvm-cov nextest
See this in practice with nextest's own CI.
TODO: provide instructions for other report forms like HTML, and for reporting to an external code coverage service.
Integrating nextest into coverage tools
Most coverage tools work by setting a few environment variables such as `RUSTFLAGS` or `RUSTC_WRAPPER`. Nextest runs Cargo for the build, which will read those environment variables as usual. This means that it should generally be quite straightforward to integrate nextest into other coverage tools.
If you've integrated nextest into a coverage tool, feel free to submit a pull request with documentation.
Miri and nextest
Nextest works with the Miri interpreter for Rust. This interpreter can check for certain classes of undefined behavior. It can also run your tests for (almost) arbitrary targets.
Benefits
The main benefit of using nextest with Miri is that each test runs in its own process. This means that it's easier to identify memory leaks, for example.
Miri can be very taxing on most computers. If nextest is run under Miri, it configures itself to use 1 thread by default. This mirrors cargo miri test. You can customize this with the --test-threads/-j option.
Requirements
- cargo-nextest 0.9.29 or above
- Miri from Rust nightly 2022-07-24 or after
Usage
After installing Miri, run:
cargo miri nextest run
You may need to specify the toolchain to run as, using cargo +nightly-YYYY-MM-DD miri nextest run.
Miri supports cross-interpretation, so e.g. to run your tests on a big-endian target, run:
cargo miri nextest run --target mips64-unknown-linux-gnuabi64
This does not require installing any special toolchain, and will work even if you are using macOS or Windows.
Note: Archiving and reusing builds is not supported under Miri.
Configuring nextest running under Miri
If nextest detects a Miri environment, it uses the default-miri profile by default. Add repository-specific Miri configuration to this profile. For example, to terminate tests after 2 minutes, add this to .config/nextest.toml:
[profile.default-miri]
slow-timeout = { period = "60s", terminate-after = 2 }
Criterion benchmarks
Nextest supports running benchmarks in "test mode" with Criterion.rs.
What is test mode?
Many benchmarks depend on the system that's running them being quiescent. In other words, while benchmarks are being run there shouldn't be any other user or system activity. This can make benchmarks hard or even unsuitable to run in CI systems like GitHub Actions, where the capabilities of individual runners vary or are too noisy to produce useful results.
However, it can still be good to verify in CI that benchmarks compile correctly, and don't panic when run. To support this use case, libraries like Criterion support running benchmarks in "test mode".
For Criterion and nextest, benchmarks are run with the following settings:
- With the test Cargo profile. This is typically the same as the dev profile, and can be overridden with --cargo-profile.
- With one iteration of the benchmark.
Requirements
- Criterion 0.5.0 or above; previous versions are not compatible with nextest.
- Any recent version of cargo-nextest.
Running benchmarks
By default, cargo nextest run does not include benchmarks as part of the test run. (This matches cargo test.)
To include benchmarks in your test run, use cargo nextest run --all-targets.
This will produce output that looks like:
% cargo nextest run --all-targets
    Finished test [unoptimized + debuginfo] target(s) in 0.05s
    Starting 7 tests across 1 binaries
        PASS [   0.368s] my-benchmarks::bench/my_bench depends_on_cache
        PASS [   0.404s] my-benchmarks::bench/my_bench depends_on
        PASS [   0.443s] my-benchmarks::bench/my_bench into_ids
        PASS [   0.520s] my-benchmarks::bench/my_bench make_graph
        PASS [   0.546s] my-benchmarks::bench/my_bench resolve_package
        PASS [   0.588s] my-benchmarks::bench/my_bench make_cycles
        PASS [   0.625s] my-benchmarks::bench/my_bench make_package_name
------------
     Summary [   0.626s] 7 tests run: 7 passed, 0 skipped
To run just benchmarks in test mode, use cargo nextest run --benches.
Stability policy
This section contains information on how cargo-nextest will evolve in a backwards-compatible way over time.
The cargo-nextest binary
The cargo-nextest binary follows semantic versioning, where the public API consists of exactly the following:
- command-line arguments, options and flags
- machine-readable output
- the configuration file format
Experimental features are not part of the public API. They may change or be removed in a patch release.
Within a version series, the public API will be append-only. New options or keys may be added, but existing keys will continue to be as they were. Existing options may be deprecated but will not be removed within a version series.
The public API does not include human-readable output generated by nextest, or in general anything printed to stderr.
Libraries
The libraries used by cargo-nextest, nextest-metadata and nextest-runner, follow the standard Rust library versioning policy.
nextest-metadata
nextest-metadata contains data structures for deserializing cargo-nextest's machine-readable output.
nextest-metadata is forward-compatible with cargo-nextest. A given version of nextest-metadata will continue to work with all future versions of cargo-nextest (within the same version series).
However, currently, nextest-metadata is not backwards-compatible: a new version of nextest-metadata may not be able to parse metadata generated by older versions of cargo-nextest. In other words, each version of nextest-metadata has a minimum supported cargo-nextest version, analogous to a crate's minimum supported Rust version (MSRV).
A bump to the minimum supported cargo-nextest version is considered a breaking change, and will be paired with a major version bump.
(The policy around backwards compatibility may be relaxed in the future based on user needs.)
nextest-runner
nextest-runner is built to serve the needs of cargo-nextest. Every cargo-nextest release is likely to correspond to a breaking change to nextest-runner.
Minimum supported Rust version (MSRV)
The MSRV of cargo-nextest or dependent crates may be changed in a patch release. At least the last 3 versions of Rust will be supported at any time.
Experimental features
This section documents new features in nextest that aren't stable yet. These features may be changed or removed at any time, and must be accessed through a configuration option.
Current features
Setup scripts
- Nextest version: 0.9.59 and above
- Enable with: Add experimental = ["setup-scripts"] to .config/nextest.toml
- Tracking issue: #978
Nextest supports running setup scripts before tests are run. Setup scripts can be scoped to particular tests via filter expressions.
Setup scripts are configured in two parts: defining scripts, and setting up rules for when they should be executed.
Defining scripts
Setup scripts are defined using the top-level script configuration. For example, to define a script named "my-script", which runs my-script.sh:
[script.my-script]
command = 'my-script.sh'
Commands can either be specified using Unix shell rules, or as a list of arguments. In the following example, script1 and script2 are equivalent.
[script.script1]
command = 'script.sh -c "Hello, world!"'
[script.script2]
command = ['script.sh', '-c', 'Hello, world!']
Setup scripts can have the following configuration options attached to them:
- slow-timeout: Mark a setup script as slow or terminate it, using the same configuration as for tests. By default, setup scripts are not marked as slow or terminated (this is different from the slow timeout for tests).
- leak-timeout: Mark setup scripts leaky after a timeout, using the same configuration as for tests. By default, the leak timeout is 100ms.
- capture-stdout: true if the script's standard output should be captured, false if not. By default, this is false.
- capture-stderr: true if the script's standard error should be captured, false if not. By default, this is false.
Example
[script.db-generate]
command = 'cargo run -p db-generate'
slow-timeout = { period = "60s", terminate-after = 2 }
leak-timeout = "1s"
capture-stdout = true
capture-stderr = false
Setting up rules
In configuration, you can create rules for when to use scripts on a per-profile basis. This is done via the profile.<profile-name>.scripts array. For example, you can set up a script that generates a database if tests from the db-tests package, or any packages that depend on it, are run.
[[profile.default.scripts]]
filter = 'rdeps(db-tests)'
setup = 'db-generate'
(This example uses the rdeps filter expression predicate.)
Setup scripts can also filter based on platform, using the rules listed in Specifying platforms:
[[profile.default.scripts]]
platform = { host = "cfg(unix)" }
setup = 'script1'
A set of scripts can also be specified. All scripts in the set will be executed.
[[profile.default.scripts]]
filter = 'test(/^script_tests::/)'
setup = ['script1', 'script2']
Script execution
A given setup script S is only executed if the current profile has at least one rule where the filter and platform predicates match the current execution environment, and the setup script S is listed in setup.
Setup scripts are executed serially, in the order they are defined (not the order they're specified in the rules). If any setup script exits with a non-zero exit code, the entire test run is terminated.
Environment variables
Setup scripts can define environment variables that will be exposed to tests that match the script. This is done by writing to the $NEXTEST_ENV environment variable from within the script.
For example, let's say you have a script my-env-script.sh:
#!/bin/bash
# Exit with 1 if NEXTEST_ENV isn't defined.
if [ -z "$NEXTEST_ENV" ]; then
exit 1
fi
# Write out an environment variable to $NEXTEST_ENV.
echo "MY_ENV_VAR=Hello, world!" >> "$NEXTEST_ENV"
And you define a setup script and a corresponding rule:
[script.my-env-script]
command = 'my-env-script.sh'
[[profile.default.scripts]]
filter = 'test(my_env_test)'
setup = 'my-env-script'
Then, in tests which match this script, the environment variable will be available:
#[test]
fn my_env_test() {
    assert_eq!(std::env::var("MY_ENV_VAR"), Ok("Hello, world!".to_string()));
}
How nextest works
To understand how nextest works, it is useful to first look at the execution model used by cargo test.
The cargo test execution model
By default, cargo test uses this execution model:
In this model, each test binary is run serially, and binaries are responsible for running individual tests in parallel.
This model provides the greatest flexibility because the only interface between cargo test and the binary is the exit code. For cargo test, a binary exiting with exit code 0 means that all tests within it passed, while a non-zero exit code means that some tests within it failed.
However, this model has several problems:
- There's no structured way to get individual test pass/fail status, or the time taken by each test.
- The first binary failure means that no further tests are run, unless --no-fail-fast is passed in. If that argument is passed in CI scenarios, then failing test output is not printed at the end, making it hard to figure out which tests failed.
- Performance can be affected by test bottlenecks. For example, if a binary has 20 tests and 19 of them take less than 5s, while one of them takes 60s, then the test binary will take 60s to complete execution. cargo test has no way to start running other test binaries in those last 55 seconds.
The nextest model
cargo-nextest uses a very different execution model, inspired by state-of-the-art test runners used at large corporations. Here's what cargo-nextest does:
A cargo-nextest run has two separate phases:
- The list phase. cargo-nextest first builds all test binaries with cargo test --no-run, then queries those binaries to produce a list of all tests within them.
- The run phase. cargo-nextest then executes each individual test in a separate process, in parallel. It then collects, displays and aggregates results for each individual test.
This model solves all the problems of cargo test's execution model, at the cost of a significantly thicker interface to test binaries. This means that custom test harnesses may need to be adapted to work with cargo-nextest.
Contributing features back to cargo?
Readers may be wondering if any of this work will be contributed back to cargo.
There is currently an ongoing effort to add parts of nextest to cargo test. However, there are a few reasons nextest remains a separate project for now:
- As documented above, nextest has a significantly thicker interface with the test binary than Cargo does. cargo test cannot just change how it works without breaking backwards compatibility, while nextest did not have this constraint at the time it was created.
- While nextest aims to be as stable as possible, it has fewer stability guarantees than Cargo does. It is easier to experiment with improvements without having to worry about the long-term stability guarantees provided by Cargo, or go through the (necessarily) heavyweight Rust RFC process.
- Ultimately, the primary maintainer of nextest considers it a more efficient use of their time to maintain nextest than to try and port the changes over to cargo test (which won't make nextest fully redundant anyway, so it would still need to be maintained).
With all that said, we'd love to see how cargo changes over time. However, the expectation is that nextest will always have a role as a place to experiment with UX and workflow improvements.
Benchmarks
Nextest's execution model generally leads to faster test runs than Cargo. How much faster depends on the specifics, but here are some general guidelines:
- Larger workspaces will see a greater benefit. This is because larger workspaces have more crates, more test binaries, and more potential spots for bottlenecks.
- Bottlenecks with "long pole" tests. Nextest excels in situations where there are bottlenecks in multiple test binaries: cargo test can only run them serially, while nextest can run those tests in parallel.
- Build caching. Test runs are one component of end-to-end execution times. Speeding up the build by using sccache, the Rust Cache GitHub Action, or similar, will make test run times be a proportionally greater part of overall times.
Even if nextest doesn't result in faster test runs, you may find it useful for identifying test bottlenecks, for its user interface, or for its other features.
Results
Project | Revision | Test count | cargo test (s) | nextest (s) | Improvement |
---|---|---|---|---|---|
crucible | cb228c2b | 483 | 5.14 | 1.52 | 3.38× |
guppy | 2cc51b41 | 271 | 6.42 | 2.80 | 2.29× |
mdBook | 0079184c | 199 | 3.85 | 1.66 | 2.31× |
meilisearch | bfb1f927 | 721 | 57.04 | 28.99 | 1.96× |
omicron | e7949cd1 | 619 | 444.08 | 202.50 | 2.19× |
penumbra | 4ecd94cc | 144 | 125.38 | 90.96 | 1.37× |
reqwest | 3459b894 | 113 | 5.57 | 2.26 | 2.48× |
ring | 450ada28 | 179 | 13.12 | 9.40 | 1.39× |
tokio | 1f50c571 | 1138 | 24.27 | 11.60 | 2.09× |
Specifications
All measurements were done on:
- Processor: AMD Ryzen 9 7950X x86_64, 16 cores/32 threads
- Operating system: Pop!_OS 22.04 running Linux kernel 6.0.12
- RAM: 64GB
- Rust: version 1.66.0
The commands run were:
- cargo test: cargo test --workspace --bins --lib --tests --examples --no-fail-fast (to exclude doctests, since they're not supported by nextest)
- nextest: cargo nextest run --workspace --bins --lib --tests --examples --no-fail-fast
The measurements do not include time taken to build the tests. To ensure that, each command was run 5 times in succession. The measurement recorded is the minimum of runs 3, 4 and 5.
Custom test harnesses
A custom test harness is defined in Cargo.toml as:
[[test]]
name = "my-test"
harness = false
As mentioned in How nextest works, cargo-nextest has a much thicker interface with the test harness than cargo test does. If you don't use any custom harnesses, cargo-nextest will run out of the box. However, custom test harnesses need to follow certain rules in order to work with nextest.
libtest-mimic (recommended)
Nextest is compatible with custom test harnesses based on libtest-mimic, version 0.4.0, or 0.5.2 or above (note that 0.5.0 and 0.5.1 have a regression). Using this crate is recommended.
Example: datatest-stable
For a test harness based on libtest-mimic, see datatest-stable. This harness implements data-driven tests.
- datatest-stable can be used out of the box if each test is specified by a file within a particular directory on disk. For example, it is used by the Move smart contract language for many of its internal tests. With these tests, the harness is used to verify that each .mvir input results in the .exp output.
- datatest-stable also serves as an example for how to write your own custom test harness, if you need to.
Note: Versions of libtest-mimic prior to 0.4.0 are not compatible with nextest.
Manually implementing a test harness
For your test harness to work with nextest, follow these rules (keywords are as in RFC 2119):
- The test harness MUST support being run with --list --format terse. This command MUST print to stdout all tests, each in exactly the format
  my-test-1: test
  my-test-2: test
  Other output MUST NOT be written to stdout. Custom test harnesses that are meant to be run as a single unit MUST produce just one line in the output.
- The test harness MUST support being run with --list --format terse --ignored. This command MUST print to stdout exactly the set of ignored tests (however the harness defines them) in the same format as above. If there are no ignored tests or if the test harness doesn't support ignored tests, the output MUST be empty. The set of ignored tests MUST be either of the following two options:
  - A subset of the tests printed out without --ignored; this is what libtest does.
  - A completely disjoint set of tests from those printed out without --ignored.
- Test names that are not at the top level (however the harness defines this) SHOULD be returned as path::to::test::test_name. This is recommended because the cargo-nextest UI uses :: as a separator to format test names nicely.
- The test harness MUST support being run with <test-name> --nocapture --exact. This command will be called with every test name provided by the harness in --list above.
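The rules above can be sketched with a toy harness. The following shell function is purely illustrative (the test names, the "ok" output, and the structure are invented; a real harness would be a compiled test binary), but it follows the listing and run protocols described above.

```shell
# Purely illustrative sketch of the harness protocol described above.
# Test names (smoke::starts_up, smoke::shuts_down) are invented.
harness() {
  case "$1" in
    --list)
      # Listing mode: print `name: test` lines to stdout, and nothing else.
      # With --ignored, this harness has no ignored tests, so it prints nothing.
      case "$*" in
        *--ignored*) : ;;
        *)
          echo 'smoke::starts_up: test'
          echo 'smoke::shuts_down: test'
          ;;
      esac
      ;;
    # Run mode: invoked as `<test-name> --nocapture --exact`.
    smoke::starts_up | smoke::shuts_down) echo ok ;;
    *) echo "unknown test: $1" >&2; return 1 ;;
  esac
}

# Nextest would invoke the harness roughly like this:
harness --list --format terse
harness smoke::starts_up --nocapture --exact
```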
Changelog
This page documents new features and bugfixes for cargo-nextest. Please see the stability policy for how versioning works with cargo-nextest.
0.9.59 - 2023-09-27
Added
- Experimental support for setup scripts. Please try them out, and provide feedback in the tracking issue!
- Support for newer error messages generated by Rust 1.73.
Fixed
- deps() and rdeps() predicates in per-test overrides were previously not working correctly. With this version they now work.
0.9.58 - 2023-09-20
Added
- Per-test overrides can now be filtered separately by host and target platforms.
- New --cargo-quiet and --cargo-verbose options to control Cargo's quiet and verbose output options. Thanks Oliver Tale-Yazdi for your first contribution!
Fixed
- Improved color support by pulling in zkat/supports-color#14. Now nextest should produce color more often when invoked over SSH.
0.9.57 - 2023-08-02
Fixed
- Fixed the case where .config/nextest.toml isn't present (#926).
0.9.56 - 2023-08-02
Fixed
- nextest-version is now parsed in a separate pass. This means that error reporting in case of an incompatible config is now better.
0.9.55 - 2023-07-29
Added
- Support for Cargo's --timings option (#903).
- Support for required and recommended versions, via a new nextest-version top-level configuration option. See Minimum nextest versions for more.
Fixed
- Detect tests if debug level line-tables-only is passed in (#910).
0.9.54 - 2023-06-25
Added
Custom targets
- Nextest now supports custom targets specified via --target, CARGO_BUILD_TARGET, or configuration. See the Rust Embedonomicon for how to create a custom target.
- For per-test overrides, platform filters now support custom target triples.
0.9.53 - 2023-05-15
Added
- Filter expressions in TOML files can now be specified as multiline TOML strings. For example:
[[profile.default.overrides]]
filter = '''
test(my_test)
| package(my-package)
'''
# ...
Changed
- show-config test-groups now shows a clean representation of filter expressions, to enable printing out multiline expressions neatly.
0.9.52 - 2023-05-04
Fixed
- Updated dependencies to resolve a build issue on Android (#862).
0.9.51 - 2023-03-19
Changed
- The definition of threads-required has changed slightly. Previously, it was possible for global and group concurrency limits to be exceeded in some circumstances. Now, concurrency limits are never exceeded. This enables some new use cases, such as being able to declare that a test is mutually exclusive with all other tests globally.
0.9.50 - 2023-03-13
Added
- cargo nextest r added as a shortcut for cargo nextest run.
Fixed
- Switched to using OpenSSL on RISC-V, since ring isn't available on that platform.
0.9.49 - 2023-01-13
Added
- New configuration settings added to JUnit reports: junit.store-success-output (defaults to false) and junit.store-failure-output (defaults to true) control whether output for passing and failing tests should be stored in the JUnit report.
- The following configuration options can now be specified as per-test overrides:
  - success-output and failure-output
  - junit.store-success-output and junit.store-failure-output
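As a sketch, these settings slot into .config/nextest.toml like this. The profile name "ci" and the report path are illustrative choices; setting junit.path is what enables the JUnit report in the first place.

```toml
# Sketch: JUnit output settings in .config/nextest.toml.
# The profile name "ci" and the path are illustrative choices.
[profile.ci.junit]
path = "junit.xml"            # setting a path enables the JUnit report
store-success-output = false  # the default
store-failure-output = true   # the default
```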
0.9.48 - 2023-01-02
Added
- You can now mark certain groups of tests to be run with a limited amount of concurrency within the group. This can be used to run tests within a group serially, similar to the serial_test crate. For more about test groups, see Test groups and mutual exclusion.
- A new show-config test-groups command shows test groups currently in effect. (show-config will be broadened to show other kinds of configuration in future releases.)
- Nextest now warns you if you've defined a profile in the default- namespace that isn't already known. Any profile names starting with default- are reserved for future use. Thanks Marcelo Nicolas Gomez Rivera for your first contribution!
Changed
- On Unix platforms, nextest now uses a new double-spawn test execution mode. This mode resolves some race conditions around signal handling without an apparent performance cost. This mode is not expected to cause any issues. However, if it does, you can turn it off by setting NEXTEST_DOUBLE_SPAWN=0 in your environment. (Please report an issue if it does!)
- MSRV updated to Rust 1.64.
0.9.47 - 2022-12-10
Fixed
- cargo nextest run -E 'deps(foo)' queries now work again. Thanks Simon Paitrault for your first contribution!
0.9.46 - 2022-12-10
This version was not published due to a packaging issue.
0.9.45 - 2022-12-04
Added
- Support for listing and running tests in examples with the --examples and --example <EXAMPLE> command-line arguments. Thanks Jed Brown for your first contribution!
- Pre-built binaries are now available for FreeBSD and illumos. Due to GitHub Actions limitations, nextest is not tested on these platforms and might be broken.
0.9.44 - 2022-11-23
Added
Double-spawning test processes
On Unix platforms, a new experimental "double-spawn" approach to running test binaries has been added. With the double-spawn approach, when listing or running tests, nextest will no longer spawn test processes directly. Instead, nextest will first spawn a copy of itself, which will do some initial setup work and then exec the test process.
The double-spawn approach is currently disabled by default. It can be enabled by setting NEXTEST_EXPERIMENTAL_DOUBLE_SPAWN=1 in your environment.
The double-spawn approach will soon be enabled by default.
Pausing and resuming test runs
Nextest now has initial support for handling SIGTSTP (Ctrl-Z) and SIGCONT (fg). On SIGTSTP (e.g. when Ctrl-Z is pressed), all running tests and timers are paused, and nextest is suspended. On SIGCONT (e.g. when fg is run), tests and timers are resumed.
Note that, by default, pressing Ctrl-Z in the middle of a test run can lead to nextest runs hanging sometimes. These nondeterministic hangs will not happen if both of the following are true:
- Nextest is built with Rust 1.66 (currently in beta) or above. Rust 1.66 contains a required fix to upstream Rust. Note that the pre-built binaries for this version are built with beta Rust to pick this fix up.
- The double-spawn approach is enabled (see above) with NEXTEST_EXPERIMENTAL_DOUBLE_SPAWN=1.
Call for testing: Please try out the double-spawn approach by setting NEXTEST_EXPERIMENTAL_DOUBLE_SPAWN=1 in your environment. It has been extensively tested and should not cause any breakages, but if it does, please report an issue. Thank you!
Fixed
- Fixed an issue with nextest hanging on Windows with spawned processes that outlive the test (#656). Thanks to Chip Senkbeil for reporting it and providing a minimal example!
0.9.43 - 2022-11-04
Nextest is now built with Rust 1.65. This version of Rust is the first one to spawn processes using posix_spawn rather than fork/exec on macOS, which should lead to performance benefits in some cases.
For example, on an M1 Mac Mini, with the clap repository at 520145e, and the command cargo nextest run -E 'not (test(ui_tests) + test(example_tests))':
- Before: 0.636 seconds
- After: 0.284 seconds (2.23x faster)
This is a best-case scenario; tests that take longer to run will generally benefit less.
Added
- The threads-required configuration now supports the values "num-cpus", for the total number of logical CPUs available, and "num-test-threads", for the number of test threads nextest is running with.
- Nextest now prints a warning if a configuration setting is unknown.
Fixed
- Configuring retries = 0 now works correctly. Thanks xxchan for your first contribution!
0.9.42 - 2022-11-01
Added
- Added a new threads-required configuration that can be specified as a per-test override. This can be used to limit concurrency for heavier tests, to avoid overwhelming the CPU or running out of memory.
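A sketch of what such an override might look like in .config/nextest.toml; the filter expression and the value 4 are illustrative choices, not defaults.

```toml
# Sketch: a per-test override limiting concurrency for heavy tests.
# The filter expression and the value 4 are illustrative.
[[profile.default.overrides]]
filter = 'test(heavy_)'
threads-required = 4
```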
0.9.41 - 2022-11-01
This release ran into an issue during publishing and was skipped.
0.9.40 - 2022-10-25
Added
- Overrides can now be restricted to certain platforms, using triples or cfg() expressions. For example, to add retries, but only on macOS:
  [[profile.default.overrides]]
  platform = 'cfg(target_os = "macos")'
  retries = 3
  For an override to match, platform and filter (if specified) must both be true for a given test. While cross-compiling code, platform is matched against the host platform for host tests, and against the target platform for target tests.
- Nextest now reads environment variables specified in the [env] section from .cargo/config.toml files. The full syntax is supported, including force and relative. Thanks to Waleed Khan for your first contribution!
- Nextest now sets the CARGO_PKG_RUST_VERSION environment variable when it runs tests. For cargo test this was added in Rust 1.64, but nextest sets it across all versions of Rust.
0.9.39 - 2022-10-14
Added
- On Unix platforms, if a process times out, nextest attempts to terminate it gracefully by sending it SIGTERM, waiting for a grace period of 10 seconds, and then sending it SIGKILL. A custom grace period can now be specified through the slow-timeout.grace-period parameter. For more information, see How nextest terminates tests. Thanks to Ben Kimock for your first contribution!
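As a sketch, the parameter slots into the slow-timeout table like this (the period, terminate-after, and grace-period values are illustrative, not defaults):

```toml
# Sketch: a custom termination grace period in .config/nextest.toml.
# All values below are illustrative.
[profile.default]
slow-timeout = { period = "30s", terminate-after = 2, grace-period = "5s" }
```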
Internal improvements
- Updated clap to version 4.0.
0.9.38 - 2022-10-05
Added
- Test retries now support fixed delays and exponential backoffs, with optional jitter. See Delays and backoff for more information. Thanks Tomas Olvecky for your first contribution!
Internal improvements
- Reading test data from standard output and standard error no longer buffers twice, just once. Thanks Jiahao XU for your first contribution!
Note to distributors: now that Rust 1.64 is out, process_group_bootstrap_hack is no longer supported or required. Please remove the following environment variables if you've set them:
RUSTC_BOOTSTRAP=1
RUSTFLAGS='--cfg process_group --cfg process_group_bootstrap_hack'
0.9.37 - 2022-09-30
Added
- Support for a negative value for --test-threads/-j, matching support in recent versions of Cargo. A value of -1 means the number of logical CPUs minus 1, and so on. Thanks Onigbinde Oluwamuyiwa Elijah for your first contribution!
- Added a note to the help text for --test-threads indicating that its default value is obtained from the profile. Thanks jiangying for your first contribution!
Changed
- Internal dependency target-spec bumped to 1.2.0 -- this means that newer versions of the windows crate are now supported.
- MSRV updated to Rust 1.62.
0.9.36 - 2022-09-07
Added
- A new --hide-progress-bar option (environment variable NEXTEST_HIDE_PROGRESS_BAR) forces the progress bar to be hidden. Thanks Remo Senekowitsch for your first contribution!
Changed
- Nextest now prints out a list of failing and flaky tests at the end of output by default (the final-status-level config is set to flaky).
- The progress bar is now hidden if a CI environment is detected.
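If you prefer the previous behavior, a sketch of opting out in .config/nextest.toml:

```toml
# Sketch: turning off the end-of-run status list (opting out of the new default).
[profile.default]
final-status-level = "none"
```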
0.9.35 - 2022-08-17
Added
- Support for the --config argument, stabilized in Rust 1.63. This option is used to configure Cargo, not nextest. The argument is passed through to Cargo, and is also used by nextest to determine e.g. the target runner for a platform. --config is also how Miri communicates with nextest.
- Target runners for cross-compilation now work with build archives. Thanks Pascal Kuthe for your first contribution!
0.9.34 - 2022-08-12
Added
- For cargo nextest self update, added -f as a short-form alias for --force.
Fixed
- Tests are no longer retried after a run is canceled. Thanks iskyzh for your contribution!
0.9.33 - 2022-07-31
Fixed
- Fixed regression in cargo-nextest 0.9.32 where it no longer produced any output if stderr wasn't a terminal.
0.9.32 - 2022-07-30
Added
- cargo nextest run now has a new --no-run feature to build but not run tests. (This was previously achievable with cargo nextest list -E 'none()', but is more intuitive this way.)
- Pre-built binaries are now available for i686 Windows. Thanks Guiguiprim!
Internal improvements
- Filter expression evaluation now uses a stack machine via the recursion crate. Thanks Inanna for your first contribution!
0.9.31 - 2022-07-27
Added
- Nextest sets a new NEXTEST_RUN_ID environment variable with a UUID for a test run. All tests run within a single invocation of cargo nextest run will have the same run ID. Thanks mitsuhiko for your first contribution!
0.9.30 - 2022-07-25
Fixed
- Fixed target runners specified as relative paths.
- On Unix, cargo-nextest's performance had regressed (by 3x on clap) due to the change introduced in version 0.9.29 to put each test process into its own process group. In this version, this regression has been fixed, but only if you're using the pre-built binaries or building on Rust 1.64+ (currently in nightly).
  Note to distributors: to fix this regression while building with stable Rust 1.62, set the following environment variables:
  - RUSTC_BOOTSTRAP=1
  - RUSTFLAGS='--cfg process_group --cfg process_group_bootstrap_hack'
  This is temporary until the process_set_process_group feature is stabilized in Rust 1.64.
0.9.29 - 2022-07-24
Added
- On Unix, each test process is now put into its own process group. If a test times out or Ctrl-C is pressed, the entire process group is signaled. This means that most subprocesses spawned by tests are also killed. However, because process groups aren't nested, if a test creates a process group itself, those groups won't be signaled. This is a relatively uncommon situation.
- On Windows, each test process is now associated with a job object. On timeouts, the entire job object is terminated. Since job objects are nested in recent versions of Windows, this should result in all subprocesses spawned by tests being killed. (On Windows, the Ctrl-C behavior hasn't changed. Nextest also doesn't do graceful shutdowns on Windows yet, though this may change in the future.)
- Nextest can now parse Cargo configs specified via the unstable --config option.
- Nextest now publishes binaries for aarch64-unknown-linux-gnu (#398) and x86_64-unknown-linux-musl (#399). Thanks messense and Teymour for your first contributions!
Fixed
- Per-test overrides are now additive across configuration files (including tool-specific configuration files).
0.9.29-rc.1 - 2022-07-24
This is a test release to ensure that releasing Linux aarch64 and musl binaries works well.
0.9.28 - 2022-07-22
This is a quick hotfix release to ensure that the right tokio features are enabled under default-no-update.
0.9.27 - 2022-07-22
This is a major architectural rework of nextest. We've tested it thoroughly to the best of our ability, but if you see regressions please report them!
If you encounter a regression, you can temporarily pin nextest to the previous version in CI. If you're on GitHub Actions and are using taiki-e/install-action, use this instead:
- uses: taiki-e/install-action@v1
  with:
    tool: nextest
    version: 0.9.26
Added
- Nextest now works with the Miri interpreter. Use `cargo miri nextest run` to run your tests with Miri.
- Nextest now detects some situations where tests leak subprocesses. Previously, these situations would cause nextest to hang.
- Per-test overrides now support `slow-timeout` and the new `leak-timeout` config parameter.
- A new option `--tool-config-file` allows tools that wrap nextest to specify custom config settings, while still prioritizing repository-specific configuration.
Changed
- Major internal change: The nextest list and run steps now use Tokio. This change enables the leak detection described above.
- The list step now runs list commands in parallel. This should result in speedups in most cases.
Fixed
- Nextest now redirects standard input during test runs to `/dev/null` (or `NUL` on Windows). Most tests do not read from standard input, but if a test does, it will no longer cause nextest to hang.
- On Windows, nextest configures standard input, standard output and standard error to not be inherited. This prevents some kinds of test hangs on Windows.
- If a dynamic library link path doesn't exist, nextest no longer adds it to `LD_LIBRARY_PATH` or equivalent. This should have no practical effect.
- Archiving tests now works even if the target directory is not called `target`.
0.9.26 - 2022-07-14
This is a quick hotfix release to update the version of nextest-metadata, to which a breaking change was accidentally committed.
0.9.25 - 2022-07-13
This is a major release with several new features.
Filter expressions
Filter expressions are now ready for production. For example, to run all tests in `nextest-runner` and all its transitive dependencies within the workspace:

```shell
cargo nextest run -E 'deps(nextest-runner)'
```
This release includes a number of additions and changes to filter expressions.
Added
- The expression language supports several new predicates:
  - `kind(name-matcher)`: include all tests in binary kinds (e.g. `lib`, `test`, `bench`) matching `name-matcher`.
  - `binary(name-matcher)`: include all tests in binary names matching `name-matcher`.
  - `platform(host)` or `platform(target)`: include all tests that are built for the host or target platform, respectively.
Changed
- If a filter expression is guaranteed not to match a particular binary, it will not be listed by nextest. (This allows `platform(host)` and `platform(target)` to work correctly.)
- If both filter expressions and standard substring filters are passed in, a test must match filter expressions AND substring filters to be executed. For example:

  ```shell
  cargo nextest run -E 'package(nextest-runner)' test_foo test_bar
  ```

  This will execute only the tests in `nextest-runner` that match `test_foo` or `test_bar`.
Per-test overrides
Nextest now supports per-test overrides. These overrides let you customize settings for subsets of tests. For example, to retry tests that contain the substring `test_e2e` 3 times:

```toml
[[profile.default.overrides]]
filter = "test(test_e2e)"
retries = 3
```

Currently, only `retries` is supported. In the future, more kinds of customization will be added.
Other changes
- A new environment variable `NEXTEST_RETRIES` controls the number of retries tests are run with. In terms of precedence, this slots in between the command-line `--retries` option and per-test overrides for retries.
- `cargo nextest list` now hides skipped tests and binaries by default. To print out skipped tests and binaries, use `cargo nextest list --verbose`.
- The machine-readable output for `cargo nextest list` now contains a new `"status"` key. By default, this is set to `"listed"`, and for binaries that aren't run because they don't match expression filters this is set to `"skipped"`.
- The `--platform-filter` option is deprecated, though it will keep working for all versions within the nextest 0.9 series. Use `-E 'platform(host)'` or `-E 'platform(target)'` instead.
- `cargo nextest run -- --skip` and `--exact` now suggest using a filter expression instead.
0.9.24 - 2022-07-01
Added
- New config option `profile.<profile-name>.test-threads` controls the number of tests run simultaneously. This option accepts either an integer with the number of threads, or the string `"num-cpus"` (default) for the number of logical CPUs. As usual, this option is overridden by `--test-threads` and `NEXTEST_TEST_THREADS`, in that order.
- The command-line `--test-threads` option and the `NEXTEST_TEST_THREADS` environment variable now accept `num-cpus` as their argument.
- nextest now works with cargo binstall (#332). Thanks Remoun for your first contribution!
Fixed
- Within JUnit XML, test failure descriptions (text nodes for `<failure>` and `<error>` tags) now have invalid ANSI escape codes stripped from their output.
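To illustrate what stripping ANSI escape codes involves (a minimal sketch, not nextest's implementation; it only handles CSI sequences of the form `ESC [ ... final-byte`):

```rust
// Strip CSI escape sequences (ESC '[' parameters... final-byte) from a string.
fn strip_ansi(s: &str) -> String {
    let mut out = String::new();
    let mut chars = s.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '\u{1b}' && chars.peek() == Some(&'[') {
            chars.next(); // consume '['
            // skip parameter/intermediate bytes; stop after the final byte
            while let Some(&n) = chars.peek() {
                chars.next();
                if ('\u{40}'..='\u{7e}').contains(&n) {
                    break;
                }
            }
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    assert_eq!(strip_ansi("\u{1b}[31mFAIL\u{1b}[0m"), "FAIL");
    println!("{}", strip_ansi("\u{1b}[1;32mok\u{1b}[0m"));
}
```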
0.9.23 - 2022-06-26
Added
- On Windows, nextest now detects tests that abort due to e.g. an access violation (segfault) and prints their status as "ABORT" rather than "FAIL", along with an explanatory message on the next line.
- Improved JUnit support: nextest now heuristically detects stack traces and adds them to the text node of the `<failure>` element (#311).
Changed
- Errors that happen while writing data to the output now have a new documented exit code: `WRITE_OUTPUT_ERROR`.
0.9.22 - 2022-06-21
Added
- Benchmarks are now treated as normal tests. (#283, thanks @tabokie for your contribution!) Note that criterion.rs benchmarks are currently incompatible with nextest (#96) -- this change doesn't have any effect on that.
- Added `-F` as a shortcut for `--features`, mirroring an upcoming addition to Cargo 1.62 (#287, thanks Alexendoo for your first contribution!)
Changed
- If nextest's output is colorized, it no longer strips ANSI escape codes from test runs.
0.9.21 - 2022-06-17
Added
- On Unix, tests that fail due to a signal (e.g. SIGSEGV) will print out the name of the signal rather than the generic "FAIL".
- `cargo-nextest` has a new `default-no-update` feature that contains all default features except for self-update. If you're distributing nextest or installing it in CI, the recommended, forward-compatible way to build cargo-nextest is with `--no-default-features --features default-no-update`.
Changed
- Progress bars now take up the entire width of the screen. This prevents issues with the bar wrapping around on terminals that aren't wide enough.
0.9.20 - 2022-06-13
Fixed
- Account for skipped tests when determining the length of the progress bar.
0.9.19 - 2022-06-13
Added
- Nextest can now update itself! Once this version is installed, simply run `cargo nextest self update` to update to the latest version.

  Note to distributors: you can disable self-update by building cargo-nextest with `--no-default-features`.
- Partial, emulated support for test binary arguments passed in after `cargo nextest run --` (#265, thanks @tabokie for your contribution!).

  For example, `cargo nextest run -- my_test --ignored` will run ignored tests containing `my_test`, similar to `cargo test -- my_test --ignored`.

  Support is limited to test names, `--ignored` and `--include-ignored`.

  Note to integrators: to reliably disable all argument parsing, pass in `--` twice. For example, `cargo nextest run -- -- <filters...>`.
Fixed
- Better detection for cross-compilation -- nextest now looks through the `CARGO_BUILD_TARGET` environment variable and Cargo configuration as well. The `--target` option is still preferred.
- Slow and flaky tests are now printed out properly in the final status output (#270).
This is a test release.
0.9.18 - 2022-06-08
Added
- Support for terminating tests if they take too long, via the configuration parameter `slow-timeout.terminate-after`. For example, to time out after 120 seconds:

  ```toml
  slow-timeout = { period = "60s", terminate-after = 2 }
  ```
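In a repository's `.config/nextest.toml`, this parameter lives under a profile; a minimal sketch assuming the default profile:

```toml
[profile.default]
# terminate after 2 slow-timeout periods of 60s each, i.e. 120 seconds total
slow-timeout = { period = "60s", terminate-after = 2 }
```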
Fixed
- Improved support for reusing builds: produce better error messages if the workspace's source is missing.
0.9.17 - 2022-06-07
This release contains a number of user experience improvements.
Added
- If producing output to an interactive terminal, nextest now prints out its status as a progress bar. This makes it easy to see the status of a test run at a glance.
- Nextest's configuration has a new `final-status-level` option which can be used to print out some statuses at the end of a run (defaults to `none`). On the command line, this can be overridden with the `--final-status-level` argument or `NEXTEST_FINAL_STATUS_LEVEL` in the environment.
- If a target runner is in use, nextest now prints out its name and the environment variable or config file the definition was obtained from.
Changed
- If the creation of a test list fails, nextest now prints a more descriptive error message, and exits with the exit code 104 (`TEST_LIST_CREATION_FAILED`).
0.9.16 - 2022-06-02
Added
- Nextest now sets `NEXTEST_LD_*` and `NEXTEST_DYLD_*` environment variables to work around macOS System Integrity Protection sanitization.
Fixed
- While archiving build artifacts, work around some libraries producing linked paths that don't exist (#247). Print a warning for those paths instead of failing.
Changed
- Build artifact archives no longer recurse into linked path subdirectories. This is not a behavioral change because `LD_LIBRARY_PATH` and other similar variables do not recurse into subdirectories either.
0.9.15 - 2022-05-31
Added
- Improved support for reusing builds:
  - New command `cargo nextest archive` automatically archives test binaries and other relevant files after building tests. Currently the `.tar.zst` format is supported.
  - New option `cargo nextest run --archive-file` automatically extracts archives before running the tests within them.
  - New runtime environment variable `NEXTEST_BIN_EXE_<name>` is set to the absolute path to a binary target's executable, taking path remapping into account. This is equivalent to `CARGO_BIN_EXE_<name>`, except this is set at runtime.
  - `cargo nextest list --list-type binaries-only` now records information about non-test binaries as well.
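A sketch of reading the runtime variable described above; `my-binary` is a hypothetical `[[bin]]` target name, and outside a nextest run the variable is simply absent:

```rust
use std::env;

// Look up the runtime path nextest sets for a binary target, mirroring
// Cargo's compile-time CARGO_BIN_EXE_<name>.
fn bin_exe(name: &str) -> Option<String> {
    env::var(format!("NEXTEST_BIN_EXE_{name}")).ok()
}

fn main() {
    match bin_exe("my-binary") {
        Some(path) => println!("binary at {path}"),
        None => println!("not running under nextest"),
    }
}
```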
Fixed
Fix for the experimental filter expressions feature:
- Fix test filtering when expression filters are set but name-based filters aren't.
0.9.14 - 2022-04-18
Fixed
Fixes related to path remapping:
- Directories passed into `--workspace-remap` and `--target-dir-remap` are now canonicalized.
- If the workspace directory is remapped, `CARGO_MANIFEST_DIR` in tests' runtime environment is set to the new directory.
0.9.13 - 2022-04-16
Added
- Support for reusing builds is now production-ready. Build on one machine and run tests on another, including cross-compiling and test partitioning. To see how builds can be reused in GitHub Actions, see this example.
- Experimental support for filter expressions, allowing fine-grained specifications for which tests to run.

Thanks to Guiguiprim for their fantastic work implementing both of these.
0.9.12 - 2022-03-22
Added
- Support for reading some configuration as environment variables. (Thanks ymgyt and iskyzh for their pull requests!)
- Machine-readable output for `cargo nextest list` now contains a `rust-build-meta` key. This key currently contains the target directory, the base output directories, and paths to search for dynamic libraries in, relative to the target directory.
Fixed
- Test binaries that link to dynamic libraries built by Cargo now work correctly (#82).
- Crates with no tests are now skipped while computing padding widths in the reporter (#125).
Changed
- MSRV updated to Rust 1.56.
- For the experimental reusing-builds feature:
  - Change `--binaries-dir-remap` to `--target-dir-remap`, and expect that the entire target directory is archived.
  - Support linking to dynamic libraries (#82).
0.9.11 - 2022-03-09
Fixed
- Update `regex` to 1.5.5 to address GHSA-m5pq-gvj9-9vr8 (CVE-2022-24713).
0.9.10 - 2022-03-07
Thanks to Guiguiprim for their contributions to this release!
Added
- A new `--platform-filter` option filters tests by the platform they run on (target or host).
- `cargo nextest list` has a new `--list-type` option, with values `full` (the default, same as today) and `binaries-only` (list out binaries without querying them for the tests they contain).
- Nextest executions done as a separate process per test (currently the only supported method, though this might change in the future) set the environment variable `NEXTEST_PROCESS_MODE=process-per-test`.
New experimental features
- Nextest can now reuse builds across invocations and machines. This is an experimental feature, and feedback is welcome in #98!
Changed
- The target runner is now build-platform-specific; test binaries built for the host platform will be run by the target runner variable defined for the host, and similarly for the target platform.
0.9.9 - 2022-03-03
Added
- Updates for Rust 1.59:
  - Support abbreviating `--release` as `-r` (Cargo #10133).
  - Stabilize future-incompat-report (Cargo #10165).
  - Update builtin list of targets (used by the target runner) to Rust 1.59.
0.9.8 - 2022-02-23
Fixed
- Target runners of the form `runner = ["bin-name", "--arg1", ...]` are now parsed correctly (#75).
- Binary IDs for `[[bin]]` and `[[example]]` tests are now unique, in the format `<crate-name>::bin/<binary-name>` and `<crate-name>::test/<binary-name>` respectively (#76).
0.9.7 - 2022-02-23
Fixed
- If parsing target runner configuration fails, warn and proceed without a target runner rather than erroring out.
Known issues
- Parsing an array of strings for the target runner currently fails: #73. A fix is being worked on in #75.
0.9.6 - 2022-02-22
Added
- Support Cargo configuration for target runners.
0.9.5 - 2022-02-20
Fixed
- Updated nextest-runner to 0.1.2, fixing cyan coloring of module paths (#52).
0.9.4 - 2022-02-16
The big new change is that release binaries are now available! Head over to Pre-built binaries for more.
Added
- In test output, module paths are now colored cyan (#42).
Fixed
- While querying binaries to list tests, lines ending with ": benchmark" will now be ignored (#46).
0.9.3 - 2022-02-14
Fixed
- Add a `BufWriter` around stderr for the reporter, reducing the number of syscalls and fixing issues around output overlap on Windows (#35). Thanks @fdncred for reporting this!
0.9.2 - 2022-02-14
Fixed
- Running `cargo nextest` from within a crate now runs tests for just that crate, similar to `cargo test`. Thanks Yaron Wittenstein for reporting this!
0.9.1 - 2022-02-14
Fixed
- Updated nextest-runner to 0.1.1, fixing builds on Rust 1.54.
0.9.0 - 2022-02-14
Initial release. Happy Valentine's day!
Added
Supported in this initial release:
- Listing tests
- Running tests in parallel for faster results
- Partitioning tests across multiple CI jobs
- Test retries and flaky test detection
- JUnit support for integration with other test tooling