test
Run tests with Pytest.
Pants uses the popular Pytest test runner to run Python tests. You may write your tests in Pytest-style, unittest-style, or mix and match both.
Each test file runs as a separate process, which gives you fine-grained caching and better parallelism. Given enough cores, Pants will be able to run all your tests at the same time. The caching is fine-grained, too: if you run ./pants test :: and then change only one file, only the tests that depend on that changed file will need to rerun.
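For example, both styles can live side by side, even in the same file (an illustrative sketch; the file and test names are made up):
# example_test.py
import unittest

# Pytest-style: a plain function using plain asserts.
def test_addition():
    assert 1 + 1 == 2

# unittest-style: a TestCase subclass, which Pytest also collects and runs.
class SubtractionTest(unittest.TestCase):
    def test_subtraction(self):
        self.assertEqual(3 - 1, 2)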
Examples
# Run all tests in the repository.
./pants test ::
# Run all the tests in this target.
./pants test helloworld/util:test
# Run just the tests in this file.
./pants test helloworld/util/lang_test.py
# Run just one test.
./pants test helloworld/util/lang_test.py -- -k test_language_translator
Pytest version and plugins
To change the Pytest version, set the version option in the [pytest] scope.
To install any plugins, add the pip requirement string to pytest_plugins in the [pytest] scope, like this:
[pytest]
version = "pytest>=5.4"
pytest_plugins.add = [
"pytest-django>=3.9.0,<4",
"pytest-rerunfailures==9.0",
]
Alternatively, if you only want to install the plugin for certain tests, you can add the plugin to the dependencies field of your python_tests target. See Third-party dependencies for how to install Python dependencies. For example:
- requirements.txt
- helloworld/util/BUILD
pytest-django==3.10.0
python_tests(
    name="tests",
    # Normally, Pants infers dependencies based on imports.
    # Here, we don't actually import our plugin, though, so
    # we need to explicitly list it.
    dependencies=["//:pytest-django"],
)
By default, Pants uses Pytest 6.x, which only supports Python 3 code. If you need to run Python 2 tests, set the version option in the [pytest] scope to pytest>=4.6,<5.
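For example:
[pytest]
version = "pytest>=4.6,<5"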
pytest-xdist plugin
We do not recommend using this plugin because its concurrency conflicts with Pants' own parallelism. Pants already brings you similar benefits to pytest-xdist: it runs each test target in parallel.
Add pytest-icdiff and pygments to the pytest_plugins option for better error messages and diffs from Pytest.
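For example, in pants.toml:
[pytest]
pytest_plugins.add = ["pytest-icdiff", "pygments"]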
Controlling output
By default, Pants only shows output for failed tests. You can change this by setting --test-output to one of all, failed, or never, e.g. ./pants test --output=all ::.
You can permanently set this in your pants.toml like this:
[test]
output = "all"
See "Passing arguments to Pytest".
For example:
$ ./pants test project/app_test.py -- -q
You may want to permanently set the Pytest option --no-header
to avoid printing the Pytest version for each test run:
[pytest]
args = ["--no-header"]
Passing arguments to Pytest
To pass arguments to Pytest, put them at the end after --, like this:
$ ./pants test project/app_test.py -- -k test_function1 -vv -s
You can also use the args option in the [pytest] scope, like this:
[pytest]
args = ["-vv"]
See https://docs.pytest.org/en/latest/usage.html for more information.
- -k expression: only run tests matching the expression.
- -v: verbose mode.
- -s: always print the stdout and stderr of your code, even if a test passes.
--pdb option
You must run ./pants test --debug for this to work properly. See the section "Running tests interactively" for more information.
Setting environment variables
Test runs are hermetic, meaning that they are stripped of the parent ./pants process's environment variables. This is important for reproducibility, and it also increases cache hits.
To add any arbitrary environment variable back to the process, use the extra_env_vars option in the [test] options scope. You can hardcode a value for the option, or leave off a value to "allowlist" it and read it from the parent ./pants process's environment.
[test]
extra_env_vars = ["VAR1", "VAR2=hardcoded_value"]
Ignore the cache with --force
To force your tests to run again, rather than reading from the cache, run ./pants test --force path/to/test.py.
Running tests interactively
Because Pants runs multiple test targets in parallel, you will not see your test results appear on the screen until the test has completely finished. This means that you cannot use debuggers normally: the breakpoint will never show up on your screen, and the test will hang indefinitely (or time out, if timeouts are enabled).
Instead, if you want to run a test interactively, such as to use a debugger like pdb, run your tests with ./pants test --debug. For example:
- test_debug_example.py
- Shell
def test_debug():
    import pdb; pdb.set_trace()
    assert 1 + 1 == 2
$ ./pants test --debug test_debug_example.py
===================================================== test session starts =====================================================
platform darwin -- Python 3.6.10, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /private/var/folders/sx/pdpbqz4x5cscn9hhfpbsbqvm0000gn/T/.tmpn2li0z
plugins: cov-2.8.1, timeout-1.3.4
collected 6 items
test_debug_example.py
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PDB set_trace (IO-capturing turned off) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /private/var/folders/sx/pdpbqz4x5cscn9hhfpbsbqvm0000gn/T/.tmpn2li0z/test_debug_example.py(11)test_debug()
-> assert 1 + 1 == 2
(Pdb) 1 + 1
2
If you use multiple files with test --debug, they will run sequentially rather than in parallel.
ipdb in tests
ipdb integrates IPython with the normal pdb debugger for enhanced features like autocomplete and improved syntax highlighting. ipdb is very helpful when debugging tests.
To be able to access ipdb when running tests, add this to your pants.toml:
[pytest]
pytest_plugins.add = ["ipdb"]
Then, you can use import ipdb; ipdb.set_trace() in your tests.
When running the tests, you will need to add -- -s to the test call, because ipdb needs stdin and Pytest captures it by default:
./pants test --debug <target> -- -s
Remote debugging with PyCharm
First, add the following target in some BUILD file (e.g., the one containing your other 3rd-party dependencies):
python_requirement_library(
    name="pydevd-pycharm",
    requirements=["pydevd-pycharm==203.5419.8"],  # Or whatever version you choose.
)
You can check this into your repo, for convenience.
Now, use the remote debugger as usual:
- Start a Python remote debugging session in PyCharm, say on port 5000.
- Add the following code at the point where you want execution to pause and connect to the debugger:
import pydevd_pycharm
pydevd_pycharm.settrace('localhost', port=5000, stdoutToServer=True, stderrToServer=True)
- Run your test with ./pants test --debug as usual.
Note: The first time you do so you may see some extra dependency resolution work, as pydevd-pycharm has now been added to the test's dependencies via inference. If you have dependency inference turned off in your repo, you will have to manually add a temporary explicit dependency in your test target on the pydevd-pycharm target.
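For example, a temporary explicit dependency might look like this (the //:pydevd-pycharm address is illustrative; use the address of the python_requirement_library target you declared above):
python_tests(
    name="tests",
    # Temporary, for debugging only; remove when done.
    dependencies=["//:pydevd-pycharm"],
)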
Using timeouts
Pants can cancel tests which take too long. This is useful to prevent tests from hanging indefinitely.
To add a timeout for a particular python_tests target, set the timeout field to an integer value of seconds, like this:
python_tests(
    name="tests",
    timeout=120,  # seconds.
)
This timeout applies to each file belonging to the python_tests target, meaning that test_f1.py and test_f2.py will each have a timeout of 120 seconds. If you want finer-grained timeouts, create a dedicated python_tests target for each file:
python_tests(
    name="test_f1",
    sources=["test_f1.py"],
    timeout=20,
)

python_tests(
    name="test_f2",
    sources=["test_f2.py"],
    timeout=35,
)
You can also set a default value and a maximum value in pants.toml:
[pytest]
timeout_default = 60
timeout_maximum = 600
If a target sets its timeout higher than --pytest-timeout-maximum, Pants will use the value in --pytest-timeout-maximum.
When debugging locally, such as with pdb, you might want to temporarily disable timeouts. To do this, set --no-pytest-timeouts:
$ ./pants test project/app_test.py --no-pytest-timeouts
conftest.py
Pytest uses conftest.py files to share fixtures and config across multiple distinct test files.
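For example, a conftest.py might define a fixture that several test files use without importing it (an illustrative sketch; the fixture name and contents are made up):
# helloworld/util/conftest.py
import pytest

@pytest.fixture
def sample_config():
    # Any test in this directory or below can take `sample_config` as an argument.
    return {"language": "en"}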
The default sources value for a python_tests target includes conftest.py. So, if you declare a BUILD file like this, the conftest.py will be included:
python_tests(
    name="tests",
    timeout=120,
    # We leave off `sources` to use the default value.
)
Otherwise, you can explicitly include the conftest.py in the sources field of a python_tests() target.
Pants will also infer dependencies on any conftest.py files in the current directory and any ancestor directories, which mirrors how Pytest behaves. This requires that each conftest.py has a target referring to it. You can verify this is working correctly by running ./pants dependencies path/to/my_test.py and confirming that each conftest.py file shows up. (You can turn off this feature by setting conftests = false in the [python-infer] scope.)
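For example, to turn conftest.py inference off:
[python-infer]
conftests = false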
Depending on test utilities, resources, and built packages
Depending on test utilities
Use the target type python_library for test utilities, rather than python_tests. For example:
- helloworld/BUILD
- helloworld/testutils.py
- helloworld/app_test.py
python_library(
    name="testutils",
    sources=["testutils.py"],
)
# We leave off the `dependencies` field because Pants will infer
# it based on import statements.
python_tests(name="tests")
...

@contextmanager
def setup_tmpdir(files: Mapping[str, str]) -> Iterator[str]:
    with temporary_dir() as tmpdir:
        ...
        yield rel_tmpdir
from helloworld.testutils import setup_tmpdir

def test_app() -> None:
    with setup_tmpdir({"f.py": "print('hello')"}):
        assert ...
Depending on resources
Refer to Resources for how to include resource files in your tests.
It's often most convenient to use files and relocated_files targets in your test code, although you can also use resources targets.
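For example, a minimal sketch of depending on a files target from a test (the target name and glob are illustrative; because loose files are never imported, the dependency is listed explicitly rather than inferred):
files(
    name="test_data",
    sources=["test_data/*.json"],
)

python_tests(
    name="tests",
    dependencies=[":test_data"],
)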
Depending on packages
For integration tests, you may want to include the result of ./pants package in your test, such as a generated .pex file. You can then, for example, use it with subprocess.run() or unzip the package.
To depend on a built package, use the runtime_package_dependencies field on the python_tests target, which is a list of addresses to targets that can be built with ./pants package, such as pex_binary, python_awslambda, and archive targets. Pants will build the package before running your test and insert the file into the test's chroot. It will use the same name it would normally use with ./pants package, except without the dist/ prefix.
For example:
- helloworld/BUILD
- helloworld/say_hello.py
- helloworld/test_binary.py
# This target teaches Pants about our non-test Python files, so that
# dependency inference will work for the pex_binary target.
python_library(
    name="lib",
)

pex_binary(
    name="bin",
    entry_point="say_hello.py",
)

python_tests(
    name="tests",
    runtime_package_dependencies=[":bin"],
)
print("Hello, test!")
import subprocess

def test_say_hello():
    assert b"Hello, test!" in subprocess.run(
        args=['helloworld/bin.pex'], stdout=subprocess.PIPE
    ).stdout
Coverage
To report coverage using Coverage.py, set the option --test-use-coverage:
$ ./pants test --use-coverage helloworld/util/lang_test.py
Or, to permanently enable coverage, set this in your config file:
[test]
use_coverage = true
Coverage defaults to running with Python 3.6+ when generating a report, which means it may fail to parse Python 2 syntax and Python 3.8+ syntax. You can fix this by changing the interpreter constraints for running Coverage:
# pants.toml
[coverage-py]
interpreter_constraints = [">=3.8"]
However, if your repository has some Python 2-only code and some Python 3-only code, you will not be able to choose an interpreter that works with both versions. So, you will need to set up a .coveragerc config file and set ignore_errors = True under [report], like this:
# .coveragerc
[report]
ignore_errors = True
# pants.toml
[coverage-py]
config = ".coveragerc"
Setting ignore_errors = True means that those files will simply be left off of the final coverage report.
There's a proposal for Pants to fix this by generating multiple reports when necessary: https://github.com/pantsbuild/pants/issues/11137. We'd appreciate your feedback.
Coverage will report data on any files encountered during the tests. You can filter down the results by using the option --coverage-py-filter and passing the name(s) of modules you want coverage data for. Each module name is recursive, meaning submodules will be included. For example:
$ ./pants test --use-coverage helloworld/util/lang_test.py --coverage-py-filter=helloworld.util
$ ./pants test --use-coverage helloworld/util/lang_test.py --coverage-py-filter='["helloworld.util.lang", "helloworld.util.lang_test"]'
Coverage will only report on files encountered during the tests' run. This means that your coverage score may be misleading; even with a score of 100%, you may have files without any tests.
This is a shortcoming of Coverage itself.
Pants will default to writing the results to the console, but you can also output in HTML, XML, JSON, or the raw SQLite file:
[coverage-py]
report = ["raw", "xml", "html", "json", "console"]
You can change the output directory with the output_dir option in the [coverage-py] scope.
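For example (the directory shown is illustrative):
[coverage-py]
output_dir = "dist/coverage/python"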
You may use a custom .coveragerc config file by setting the config option in the [coverage-py] scope. You must include relative_files = True in the [run] section for Pants to work.
- pants.toml
- .coveragerc
[coverage-py]
config = ".coveragerc"
[run]
relative_files = True
branch = True
When generating HTML, XML, and JSON reports, you can automatically open the reports through the option --test-open-coverage.
Saving JUnit XML results
Pytest can generate JUnit XML result files. This allows you to hook up your results, for example, to dashboards.
To save JUnit XML result files, set the option junit_xml_dir in the [pytest] scope, like this:
[pytest]
junit_xml_dir = "dist/pytest_results"
You may also want to set the option junit_family in the [pytest] scope to change the format. Run ./pants help-advanced pytest for more information.
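For example, a sketch assuming you want Pytest's xunit2 format (check ./pants help-advanced pytest for the accepted values):
[pytest]
junit_family = "xunit2"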