Leaky tests

Some tests create subprocesses but may not clean them up properly. Typical scenarios include:

  • A test creates a server process to test against, but does not shut it down at the end of the test.
  • A test starts a subprocess with the intent to shut it down, but panics before doing so and does not use the RAII pattern to clean up the subprocess (see the sketch after this list).
  • This can happen transitively as well: a test creates a process which creates its own subprocess, and so on.
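
For the second scenario, a guard that kills the child process in its Drop implementation cleans up even when the test panics. Here is a minimal sketch (the ChildGuard type is illustrative and not part of nextest or the standard library):

use std::process::{Child, Command};

// Kills and reaps the wrapped child process when dropped, even if the test panics.
struct ChildGuard(Child);

impl Drop for ChildGuard {
    fn drop(&mut self) {
        // Ignore errors: the child may have already exited.
        let _ = self.0.kill();
        let _ = self.0.wait();
    }
}

#[test]
fn test_with_cleanup() {
    let child = Command::new("sleep").arg("120").spawn().unwrap();
    let _guard = ChildGuard(child);

    // ... exercise the subprocess here; it is killed when `_guard` goes out of scope ...
}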

Nextest can detect some, but not all, such situations. If nextest detects a subprocess leak, it marks the corresponding test as leaky.

Leaky tests nextest detects

Currently, nextest is limited to detecting subprocesses that inherit standard output or standard error from the test. For example, here's a test that nextest will mark as leaky.

#[test]
fn test_subprocess_doesnt_exit() {
    let mut cmd = std::process::Command::new("sleep");
    cmd.arg("120");
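    // The child inherits the test's stdout/stderr and is never waited on or killed.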
    cmd.spawn().unwrap();
}

For this test, nextest will output something like:


    Starting 1 test across 8 binaries (24 skipped)
        LEAK [   0.103s] nextest-tests::basic test_subprocess_doesnt_exit
------------
     Summary [   0.103s] 1 test run: 1 passed (1 leaky), 24 skipped

Leaky tests that nextest currently does not detect

Tests that spawn subprocesses without inheriting either standard output or standard error are not currently detected by nextest. For example, the following test is not detected as leaky:

#[test]
fn test_subprocess_doesnt_exit_2() {
    let mut cmd = std::process::Command::new("sleep");
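    // Redirect output to null so the child does not inherit the test's stdout/stderr.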
    cmd.arg("120")
        .stdout(std::process::Stdio::null())
        .stderr(std::process::Stdio::null());
    cmd.spawn().unwrap();
}

Detecting such tests is a very difficult problem to solve, particularly on Unix platforms.

Note: This section is not part of nextest's stability guarantees. In the future, these tests might get marked as leaky by nextest.

Configuring the leak timeout

Nextest waits a specified amount of time (by default 100 milliseconds) after the test exits for standard output and standard error to be closed. In rare cases, you may need to configure the leak timeout.

To do so, use the leak-timeout configuration parameter. For example, to wait up to 500 milliseconds after the test exits, add this to .config/nextest.toml:

[profile.default]
leak-timeout = "500ms"

Nextest also supports per-test overrides for the leak timeout.
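
For example, a per-test override along these lines (a sketch, using the example test name from above as the filter) applies a longer leak timeout to just that test:

[[profile.default.overrides]]
filter = 'test(test_subprocess_doesnt_exit)'
leak-timeout = "1s"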

Marking leaky tests as failures

Added in cargo-nextest 0.9.95.

By default, leaky tests are considered to be successful. You can choose to mark these tests as failures instead:

[profile.default]
leak-timeout = { period = "500ms", result = "fail" }

A failure caused by leaked handles will be marked as LEAK-FAIL:

 Nextest run ID 676e92cc-cf6d-4d31-8794-fe9999813e75 with nextest profile: default
    Starting 1 test across 7 binaries (26 tests skipped)
   LEAK-FAIL [   0.104s] nextest-tests::basic test_subprocess_doesnt_exit_leak_fail
  stdout ───

    running 1 test
    test test_subprocess_doesnt_exit_leak_fail ... ok

    test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 19 filtered out; finished in 0.00s
    

    (test failed: exited with code 0, but leaked handles)

────────────
     Summary [   0.105s] 1 test run: 0 passed, 1 failed (1 due to being leaky), 26 skipped