test
pants test [args]
Run tests.
Backend: pants.core
Config section: [test]
Basic options
attempts_default
--test-attempts-default=<int>
PANTS_TEST_ATTEMPTS_DEFAULT
[test]
attempts_default = <int>
default: 1
The number of attempts to run tests, in case of a test failure. Tests that were retried will include the number of attempts in the summary output.
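For example, to retry flaky tests up to three times, this could be set in pants.toml (the value here is illustrative):

```toml
[test]
attempts_default = 3
```

The same setting can be passed on the command line as --test-attempts-default=3 or through the PANTS_TEST_ATTEMPTS_DEFAULT environment variable.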
debug
--[no-]test-debug
PANTS_TEST_DEBUG
[test]
debug = <bool>
default: False
Run tests sequentially in an interactive process. This is necessary, for example, when you add breakpoints to your code.
debug_adapter
--[no-]test-debug-adapter
PANTS_TEST_DEBUG_ADAPTER
[test]
debug_adapter = <bool>
default: False
Run tests sequentially in an interactive process, using a Debug Adapter (https://microsoft.github.io/debug-adapter-protocol/) for the language if supported.
The interactive process used will be immediately blocked waiting for a client before continuing.
This option implies --debug.
extra_env_vars
--test-extra-env-vars="['<str>', '<str>', ...]"
PANTS_TEST_EXTRA_ENV_VARS
[test]
extra_env_vars = [
'<str>',
'<str>',
...,
]
default: []
Additional environment variables to include in test processes. Entries are strings in the form ENV_VAR=value to use explicitly; or just ENV_VAR to copy the value of a variable in Pants's own environment.
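For example, pants.toml might mix both entry forms (the variable names are illustrative):

```toml
[test]
extra_env_vars = [
    "DATABASE_URL=postgres://localhost/test",  # explicit value
    "HOME",                                    # copied from Pants's own environment
]
```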
See also the test_extra_env_vars field on local_environment, docker_environment, or remote_environment targets.
force
--[no-]test-force
PANTS_TEST_FORCE
[test]
force = <bool>
default: False
Force the tests to run, even if they could be satisfied from cache.
open_coverage
--[no-]test-open-coverage
PANTS_TEST_OPEN_COVERAGE
[test]
open_coverage = <bool>
default: False
If a coverage report file is generated, open it on the local system if the system supports this.
output
--test-output=<ShowOutput>
PANTS_TEST_OUTPUT
[test]
output = <ShowOutput>
one of: all, failed, none
default: failed
Show stdout/stderr for these tests.
shard
--test-shard=<str>
PANTS_TEST_SHARD
[test]
shard = <str>
A shard specification of the form "k/N", where N is a positive integer and k is a non-negative integer less than N.
If set, the request input targets will be deterministically partitioned into N disjoint subsets of roughly equal size, and only the k'th subset will be used, with all others discarded.
Useful for splitting large numbers of test files across multiple machines in CI. For example, you can run three shards with --shard=0/3, --shard=1/3, --shard=2/3.
Note that the shards are roughly equal in size as measured by number of files. No attempt is made to consider the size of different files, the time they have taken to run in the past, or other such sophisticated measures.
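The "k/N" selection described above can be sketched as follows. Round-robin over the sorted file list is one simple deterministic scheme that yields disjoint, roughly equal subsets; it is an illustration of the idea, not necessarily the exact partitioning Pants uses.

```python
# Illustrative sketch of "k/N" shard selection: deterministically pick
# the k'th of N disjoint, roughly equal subsets of the input files.
def select_shard(files, spec):
    k_str, n_str = spec.split("/")
    k, n = int(k_str), int(n_str)
    if not 0 <= k < n:
        raise ValueError(f"invalid shard spec: {spec!r}")
    # Sorting first makes the partition stable across runs and machines.
    return [f for i, f in enumerate(sorted(files)) if i % n == k]

files = ["tests/test_c.py", "tests/test_a.py", "tests/test_d.py", "tests/test_b.py"]
shards = [select_shard(files, f"{k}/3") for k in range(3)]
# The three shards are disjoint and together cover every input file.
```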
timeouts
--[no-]test-timeouts
PANTS_TEST_TIMEOUTS
[test]
timeouts = <bool>
default: True
Enable test target timeouts. If timeouts are enabled then test targets with a timeout= parameter set on their target will time out after the given number of seconds if not completed. If no timeout is set, then either the default timeout is used or no timeout is configured.
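For example, a BUILD file might cap one target's runtime like this (the target name and value are illustrative):

```python
python_tests(
    name="integration_tests",
    timeout=120,  # seconds; only honored while [test].timeouts is enabled
)
```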
use_coverage
--[no-]test-use-coverage
PANTS_TEST_USE_COVERAGE
[test]
use_coverage = <bool>
default: False
Generate a coverage report if the test runner supports it.
Advanced options
batch_size
--test-batch-size=<int>
PANTS_TEST_BATCH_SIZE
[test]
batch_size = <int>
default: 128
The target maximum number of files to be included in each run of batch-enabled test runners.
Some test runners can execute tests from multiple files in a single run. Test implementations will return all tests that can run together as a single group - and then this may be further divided into smaller batches, based on this option. This is done:
1. to avoid OS argument length limits (in processes which don't support argument files)
2. to support more stable cache keys than would be possible if all files were operated on in a single batch
3. to allow for parallelism in test runners which don't have internal parallelism, or -- if they do support internal parallelism -- to improve scheduling behavior when multiple processes are competing for cores and so internal parallelism cannot be used perfectly
In order to improve cache hit rates (see 2.), batches are created at stable boundaries, and so this value is only a "target" max batch size (rather than an exact value).
NOTE: This parameter has no effect on test runners/plugins that do not implement support for batched testing.
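The chunking step can be sketched as follows: one compatible group of test files is split into batches of at most a target size. This simplified version just slices the sorted list; it omits the stable-boundary selection Pants uses to keep cache keys stable.

```python
# Illustrative sketch of batching: split one group of compatible test
# files into chunks of at most `target_max` files each.
def make_batches(group, target_max):
    ordered = sorted(group)
    return [ordered[i:i + target_max] for i in range(0, len(ordered), target_max)]

files = [f"tests/test_{i}.py" for i in range(5)]
batches = make_batches(files, 2)
# Five files with a target max of 2 yield three batches: 2 + 2 + 1.
```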
experimental_report_test_result_info
--[no-]test-experimental-report-test-result-info
PANTS_TEST_EXPERIMENTAL_REPORT_TEST_RESULT_INFO
[test]
experimental_report_test_result_info = <bool>
default: False
Report information about the test results.
For now, it reports only the source from where the test results were fetched. When running tests, they may be executed locally or remotely, but if there are results of previous runs available, they may be retrieved from the local or remote cache, or be memoized. Knowing where the test results come from might be useful when evaluating the efficiency of the cache and the nature of the changes in the source code that may lead to frequent cache invalidations.
report
--[no-]test-report
PANTS_TEST_REPORT
[test]
report = <bool>
default: False
Write test reports to --report-dir.
report_dir
--test-report-dir=<str>
PANTS_TEST_REPORT_DIR
[test]
report_dir = <str>
default: {distdir}/test/reports
Path to write test reports to. Must be relative to the build root.
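For example, the two report options might be combined in pants.toml (the directory value is illustrative):

```toml
[test]
report = true
report_dir = "dist/test/output"  # must be relative to the build root
```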