Testing plugins
How to verify your plugin works.
There are four main approaches to testing your plugin, ranging in scope from unit tests to integration tests. You may mix and match between these approaches.
All approaches use Pytest-style tests, rather than unittest-style tests.
You must also install the distribution pantsbuild.pants.testutil. We recommend using the pants_requirements target to do this.
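For example, a minimal sketch of a BUILD file entry (the directory and target name are your choice) that generates the requirements for pantsbuild.pants and, by default, pantsbuild.pants.testutil:

    # BUILD file in the directory holding your plugin's requirements (location is an assumption).
    pants_requirements(name="pants")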
Approach 1: normal unit tests
Often, you can factor out normal Python functions from your plugin that do not use the Rules API. These helpers can be tested like you would test any other Python code.
For example, some Pants rules take the type InterpreterConstraints as input. InterpreterConstraints has a factory method merge_constraint_sets() that we can test through a normal unit test.
    from pants.backend.python.util_rules.interpreter_constraints import InterpreterConstraints

    def test_merge_interpreter_constraints() -> None:
        # A & B => A & B
        assert InterpreterConstraints.merge_constraint_sets(
            [["CPython==2.7.*"], ["CPython==3.6.*"]]
        ) == ["CPython==2.7.*,==3.6.*"]

        # A | B => A | B
        assert InterpreterConstraints.merge_constraint_sets(
            [["CPython==2.7.*", "CPython==3.6.*"]]
        ) == ["CPython==2.7.*", "CPython==3.6.*"]
This approach can be especially useful for testing the Target API, such as testing custom validation you added to a Field.
    import pytest

    from pants.engine.addresses import Address
    from pants.engine.target import InvalidFieldException

    def test_timeout_validation() -> None:
        with pytest.raises(InvalidFieldException):
            PythonTestTimeoutField(-100, Address("demo"))
        with pytest.raises(InvalidFieldException):
            PythonTestTimeoutField(0, Address("demo"))
        assert PythonTestTimeoutField(5, Address("demo")).value == 5
How to create a Target in-memory
For Approaches #1 and #2, you will often want to pass a Target instance to your test, such as a PythonTestTarget instance.
To create a Target instance, choose which subclass you want, then pass a dictionary of the values you want to use, followed by an Address object. The dictionary corresponds to what you'd put in the BUILD file; any values that you leave off will use their default values.
The Address constructor's first argument is the path to the BUILD file; you can optionally set target_name: str if it is not the default name.
For example, given this target definition for project/app:tgt:
    python_test(
        name="tgt",
        source="app_test.py",
        timeout=120,
    )
We would write:
    tgt = PythonTestTarget(
        {"source": "app_test.py", "timeout": 120},
        Address("project/app", target_name="tgt"),
    )
Note that we did not put "name": "tgt" in the dictionary. name is a special field that does not use the Target API. Instead, pass the name to the target_name argument in the Address constructor.
For Approach #3, you should instead use rule_runner.write_files() to write a BUILD file, followed by rule_runner.get_target().
For Approach #4, you should use setup_tmpdir() to set up BUILD files.
Approach 2: run_rule_with_mocks() (unit tests for rules)
run_rule_with_mocks() will run your rule's logic, but with each argument to your @rule provided explicitly by you and with mocks for any await Gets. This means that the test is fully mocked; for example, run_rule_with_mocks() will not actually run a Process, nor will it touch the file system. This is useful when you want to test the logic inlined in your rule, but usually, you will want to use Approach #3.
To use run_rule_with_mocks(), pass the @rule as its first argument, then rule_args=[arg1, arg2, ...] in the same order as the arguments to the @rule.
For example:
    from pants.engine.rules import rule
    from pants.testutil.rule_runner import run_rule_with_mocks

    @rule
    async def int_to_str(i: int) -> str:
        return str(i)

    def test_int_to_str() -> None:
        result: str = run_rule_with_mocks(int_to_str, rule_args=[42], mock_gets=[])
        assert result == "42"
If your @rule has any await Gets or await Effects, set the argument mock_gets=[] with MockGet/MockEffect objects corresponding to each of them. A MockGet takes three arguments: output_type: type, input_types: tuple[type, ...], and mock: Callable[..., OutputType], which is a function that takes an instance of each of the input_types and returns a single instance of the output_type.
For example, given this contrived rule that finds all targets whose sources include a file with a certain name (find a "needle in the haystack"):
    from __future__ import annotations

    from dataclasses import dataclass
    from pathlib import PurePath

    from pants.engine.collection import Collection
    from pants.engine.rules import Get, MultiGet, rule
    from pants.engine.target import HydratedSources, HydrateSourcesRequest, SourcesField, Target

    @dataclass(frozen=True)
    class FindNeedle:
        """A request to find all targets with a `sources` file matching the `needle_filename`."""

        targets: tuple[Target, ...]
        needle_filename: str

    # We want to return a sequence of found `Target` objects. Rather than
    # returning `Targets`, we create a "newtype" specific to this rule.
    class TargetsWithNeedle(Collection[Target]):
        pass

    @rule
    async def find_needle_in_haystack(find_needle: FindNeedle) -> TargetsWithNeedle:
        all_hydrated_sources = await MultiGet(
            [Get(HydratedSources, HydrateSourcesRequest(tgt.get(SourcesField))) for tgt in find_needle.targets]
        )
        return TargetsWithNeedle(
            tgt
            for tgt, hydrated_sources in zip(find_needle.targets, all_hydrated_sources)
            if any(PurePath(fp).name == find_needle.needle_filename for fp in hydrated_sources.snapshot.files)
        )
We can write this test:
    from pants.engine.addresses import Address
    from pants.engine.fs import EMPTY_DIGEST, Snapshot
    from pants.engine.target import HydratedSources, HydrateSourcesRequest, SourcesField, Target
    from pants.testutil.rule_runner import MockGet, run_rule_with_mocks

    class MockTarget(Target):
        alias = "mock_target"
        core_fields = (SourcesField,)

    def test_find_needle_in_haystack() -> None:
        tgt1 = MockTarget({}, Address("", target_name="t1"))
        tgt2 = MockTarget({}, Address("", target_name="t2"))
        tgt3 = MockTarget({}, Address("", target_name="t3"))
        find_needles_request = FindNeedle(targets=(tgt1, tgt2, tgt3), needle_filename="needle.txt")

        def mock_hydrate_sources(request: HydrateSourcesRequest) -> HydratedSources:
            # Our rule only looks at `HydratedSources.snapshot.files`, so we mock all other fields. We
            # include the file `needle.txt` for the target `:t2`, but no other targets.
            files = (
                ("needle.txt", "foo.txt")
                if request.field.address.target_name == "t2"
                else ("foo.txt", "bar.txt")
            )
            mock_snapshot = Snapshot(EMPTY_DIGEST, files=files, dirs=())
            return HydratedSources(mock_snapshot, filespec={}, sources_type=None)

        result: TargetsWithNeedle = run_rule_with_mocks(
            find_needle_in_haystack,
            rule_args=[find_needles_request],
            mock_gets=[
                MockGet(
                    output_type=HydratedSources,
                    input_types=(HydrateSourcesRequest,),
                    mock=mock_hydrate_sources,
                )
            ],
        )
        assert list(result) == [tgt2]
How to mock some common types
See the note above about how to create a Target instance in-memory.
If your rule takes a Subsystem or GoalSubsystem as an argument, you can use the utilities create_subsystem and create_goal_subsystem like below. Note that you must explicitly provide all options read by your @rule; the default values will not be used.
from pants.backend.python.subsystems.setup import PythonSetup
from pants.core.goals.fmt import FmtSubsystem
from pants.testutil.option_util import create_goal_subsystem, create_subsystem
mock_subsystem = create_subsystem(PythonSetup, interpreter_constraints=["CPython==3.8.*"])
mock_goal_subsystem = create_goal_subsystem(FmtSubsystem, sep="\n")
If your rule takes Console as an argument, you can use the mock_console context manager like this:
    from pants.testutil.option_util import create_options_bootstrapper
    from pants.testutil.rule_runner import mock_console, run_rule_with_mocks

    def test_with_console() -> None:
        with mock_console(create_options_bootstrapper()) as (console, stdio_reader):
            result: MyOutputType = run_rule_with_mocks(my_rule, rule_args=[..., console])
            assert stdio_reader.get_stdout() == "expected stdout"
            assert not stdio_reader.get_stderr()
If your rule takes Workspace as an argument, first create a pants.testutil.rule_runner.RuleRunner() instance in your individual test. Then, create a Workspace object with Workspace(rule_runner.scheduler).
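For example, a minimal sketch, where my_rule and MyOutputType are hypothetical placeholders for your own rule and its return type:

    from pants.engine.fs import Workspace
    from pants.testutil.rule_runner import RuleRunner, run_rule_with_mocks

    def test_with_workspace() -> None:
        rule_runner = RuleRunner()
        workspace = Workspace(rule_runner.scheduler)
        # `my_rule` and its other arguments are placeholders; pass the workspace in the same
        # position as in your rule's signature.
        result: MyOutputType = run_rule_with_mocks(my_rule, rule_args=[..., workspace], mock_gets=[])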
Approach 3: RuleRunner (integration tests for rules)
RuleRunner allows you to run rules in an isolated environment, i.e. where you set up the rule graph and registered target types exactly how you want. RuleRunner will set up your rule graph and create a temporary build root. This is useful for integration tests that are more isolated and faster than Approach #4.
After setting up your isolated environment, you can run rule_runner.request(Output, [input1, input2]), e.g. rule_runner.request(SourceFiles, [SourceFilesRequest([sources_field])]) or rule_runner.request(TargetsWithNeedle, [FindNeedle(targets, "needle.txt")]). This will cause Pants to "call" the relevant @rule to get the output type.
Setting up the RuleRunner
First, you must set up a RuleRunner instance and activate the rules and target types you'll use in your tests. Set the argument target_types with a list of the Target types used in your tests, and set rules with a list of all the rules used transitively.
This means that you must register the rules you directly wrote, and also any rules that they depend on. Pants will automatically register some core rules for you, but leaves off most of them for better isolation of tests. If you're missing some rules, the rule graph will fail to be built.
It can be confusing figuring out what's wrong when setting up a RuleRunner. We know the error messages are not ideal and are working on improving them.
Please feel free to reach out on Slack for help with figuring out how to get things working.
    from pants.backend.python.goals import pytest_runner
    from pants.backend.python.goals.pytest_runner import PythonTestFieldSet
    from pants.backend.python.util_rules import pex_from_targets
    from pants.backend.python.target_types import PythonSourceTarget, PythonTestTarget
    from pants.core.goals.test import TestResult
    from pants.testutil.rule_runner import QueryRule, RuleRunner

    def test_example() -> None:
        rule_runner = RuleRunner(
            target_types=[PythonSourceTarget, PythonTestTarget],
            rules=[
                *pytest_runner.rules(),
                *pex_from_targets.rules(),
                QueryRule(TestResult, [PythonTestFieldSet]),
            ],
        )
What's with the QueryRule? Normally, we don't use QueryRule because we're using the asynchronous version of the Rules API, and Pants is able to parse your Python code to see how your rules are used. However, with tests, we are using the synchronous version of the Rules API, so we need to give a hint to the engine about what requests we're going to make. Don't worry about filling in the QueryRule part yet. You'll add it later when writing rule_runner.request().
Each test should create its own distinct RuleRunner instance. This is important for isolation between tests.
It's often convenient to define a Pytest fixture in each test file. This allows you to share a common RuleRunner setup, but get a new instance for each test.
    import pytest

    from pants.testutil.rule_runner import RuleRunner

    @pytest.fixture
    def rule_runner() -> RuleRunner:
        return RuleRunner(target_types=[PythonSourceTarget], rules=[rule1, rule2])

    def test_example1(rule_runner: RuleRunner) -> None:
        rule_runner.write_files(...)
        ...

    def test_example2(rule_runner: RuleRunner) -> None:
        rule_runner.write_files(...)
        ...
If you want multiple distinct RuleRunner setups in your file, you can define multiple Pytest fixtures.
    import pytest

    from pants.testutil.rule_runner import RuleRunner

    @pytest.fixture
    def first_rule_runner() -> RuleRunner:
        return RuleRunner(rules=[rule1, rule2])

    def test_example1(first_rule_runner: RuleRunner) -> None:
        first_rule_runner.write_files(...)
        ...

    def test_example2(first_rule_runner: RuleRunner) -> None:
        first_rule_runner.write_files(...)
        ...

    @pytest.fixture
    def second_rule_runner() -> RuleRunner:
        return RuleRunner(rules=[rule3])

    def test_example3(second_rule_runner: RuleRunner) -> None:
        second_rule_runner.write_files(...)
        ...
Setting up the content and BUILD files
For most tests, you'll want to create files and BUILD files in your temporary build root. Use rule_runner.write_files(files: dict[str, str]).
    from pants.testutil.rule_runner import RuleRunner

    def test_example() -> None:
        rule_runner = RuleRunner()
        rule_runner.write_files(
            {
                "project/app.py": "print('hello world!')\n",
                "project/BUILD": "python_library()",
            }
        )
This function will write the files to the correct location and also notify the engine that the files were created.
You can then use rule_runner.get_target() to have Pants read the BUILD file and give you back the corresponding Target.
    from textwrap import dedent

    from pants.engine.addresses import Address
    from pants.testutil.rule_runner import RuleRunner

    def test_example() -> None:
        rule_runner = RuleRunner()
        rule_runner.write_files(
            {
                "project/BUILD": dedent(
                    """\
                    python_source(
                        name="my_tgt",
                        source="f.py",
                    )
                    """
                )
            }
        )
        tgt = rule_runner.get_target(Address("project", target_name="my_tgt"))
To read any files that were created, use rule_runner.build_root as the first part of the path to ensure that the correct directory is read.
    from pathlib import Path

    from pants.testutil.rule_runner import RuleRunner

    def test_example() -> None:
        rule_runner = RuleRunner()
        rule_runner.write_files({"project/app.py": "print('hello world!')\n"})
        assert Path(rule_runner.build_root, "project/app.py").read_text() == "print('hello world!')\n"
Setting options
Often, you will want to set Pants options, such as activating a certain backend or setting a --config option.
To set options, call rule_runner.set_options() with a list of the arguments, e.g. rule_runner.set_options(["--pytest-version=pytest>=6.0"]). Global options need to be set when constructing the rule_runner, using the bootstrap_args parameter. For example, bootstrap_args=["--pants-ignore=['!/.normally_ignored/']"] will allow a test to read from a normally ignored directory, which can be useful for reading config files.
You can also set the keyword argument env: dict[str, str]. If an option starts with PANTS_, it will change which options Pants uses. You can include any arbitrary environment variable here; some rules use the parent Pants process to read arbitrary env vars, e.g. the --test-extra-env-vars option, so this allows you to mock the environment in your test. Alternatively, use the keyword argument env_inherit: set[str] to set the specified environment variables from the test runner's environment, which is useful for values like PATH that vary across machines.
rule_runner.set_options() will override any options that were previously set, so you will need to set everything you want in a single call.
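For example, a minimal sketch (the option values and the MY_ENV_VAR variable are illustrative, not required):

    from pants.testutil.rule_runner import RuleRunner

    def test_with_options() -> None:
        rule_runner = RuleRunner(
            # Global/bootstrap options are set at construction time.
            bootstrap_args=["--pants-ignore=['!/.normally_ignored/']"],
        )
        # Set everything in one call: a later call to set_options() would replace these values.
        rule_runner.set_options(
            ["--pytest-version=pytest>=6.0"],
            env={"MY_ENV_VAR": "mocked-value"},
            env_inherit={"PATH"},
        )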
Running your rules
Now that you have your RuleRunner set up, along with any options and the content/BUILD files for your test, you can test that your rules work correctly.
Unlike Approach #2, you will not explicitly say which @rule you want to run. Instead, look at the return type of your @rule. Use rule_runner.request(MyOutput, [input1, ...]), where MyOutput is the return type.
rule_runner.request() is equivalent to how you would normally use await Get(MyOutput, Input1, input1_instance) in a rule (see Concepts). For example, if you would normally say await Get(Digest, MergeDigests([digest1, digest2])), you'd instead say rule_runner.request(Digest, [MergeDigests([digest1, digest2])]).
You will also need to add a QueryRule to your RuleRunner setup, which gives a hint to the engine for what requests you are going to make. The QueryRule takes the same form as your rule_runner.request(), except that the inputs are types, rather than instances of those types.
For example, given this rule signature (from the above Approach #2 example):
    @rule
    async def find_needle_in_haystack(find_needle: FindNeedle) -> TargetsWithNeedle:
        ...
We could write this test:
    from textwrap import dedent

    import pytest

    from pants.core.target_types import FileTarget
    from pants.engine.addresses import Address
    from pants.testutil.rule_runner import QueryRule, RuleRunner

    @pytest.fixture
    def rule_runner() -> RuleRunner:
        return RuleRunner(
            rules=[
                find_needle_in_haystack,
                QueryRule(TargetsWithNeedle, [FindNeedle]),
            ],
            target_types=[FileTarget],
        )

    def test_find_needle(rule_runner: RuleRunner) -> None:
        # Set up the files and targets.
        rule_runner.write_files(
            {
                "project/f1.txt": "",
                "project/f2.txt": "",
                "project/needle.txt": "",
                "project/BUILD": dedent(
                    """\
                    file(name="t1", source="f1.txt")
                    file(name="t2", source="f2.txt")
                    file(name="t3", source="needle.txt")
                    """
                ),
            }
        )
        tgt1 = rule_runner.get_target(Address("project", target_name="t1"))
        tgt2 = rule_runner.get_target(Address("project", target_name="t2"))
        tgt3 = rule_runner.get_target(Address("project", target_name="t3"))

        # Run our rule.
        find_needle_request = FindNeedle((tgt1, tgt2, tgt3), needle_filename="needle.txt")
        result = rule_runner.request(TargetsWithNeedle, [find_needle_request])
        assert list(result) == [tgt3]
Given this rule signature for running the linter Bandit:
    @rule
    async def bandit_lint(
        request: BanditRequest, bandit: Bandit, python_setup: PythonSetup
    ) -> LintResults:
        ...
We can write a test like this:
    from pants.core.goals.lint import LintResult, LintResults
    from pants.testutil.rule_runner import QueryRule, RuleRunner

    @pytest.fixture
    def rule_runner() -> RuleRunner:
        return RuleRunner(
            rules=[
                *bandit_rules(),
                QueryRule(LintResults, [BanditRequest]),
            ],
            target_types=[PythonSourceTarget],
        )

    def test_bandit(rule_runner: RuleRunner) -> None:
        # Set up files and targets.
        rule_runner.write_files(...)
        ...

        # Run the Bandit rule.
        bandit_request = BanditRequest(...)
        lint_results = rule_runner.request(LintResults, [bandit_request])
Note that our @rule takes 3 parameters, but we only explicitly included BanditRequest in the inputs. This is possible because the engine knows how to compute all Subsystems based on the initial input to the graph. See Concepts.
We are happy to help figure out what rules to register and what inputs to pass to rule_runner.request(). It can also help to visualize the rule graph when running your code in production. If you're missing an input that you need, the engine will error, explaining that there is no way to compute your OutputType.
Testing @goal_rules
You can run @goal_rules by using rule_runner.run_goal_rule(). The first argument is your Goal subclass, such as Filedeps or Lint. Usually, you will set args: Iterable[str] by giving the specs for the targets/files you want to run on, and sometimes passing options for your goal like --transitive. If you need to also set global options that do not apply to your specific goal, set global_args: Iterable[str].
run_goal_rule() will return a GoalRuleResult object, which has the fields exit_code: int, stdout: str, and stderr: str.
For example, to test the filedeps goal:
    import pytest

    from pants.backend.project_info import filedeps
    from pants.backend.project_info.filedeps import Filedeps
    from pants.engine.target import Dependencies, SingleSourceField, Target
    from pants.testutil.rule_runner import RuleRunner

    # We create a mock `Target` for better isolation of our tests. We could have
    # instead used a pre-defined target like `PythonLibrary` or `Files`.
    class MockTarget(Target):
        alias = "mock_tgt"
        core_fields = (SingleSourceField, Dependencies)

    @pytest.fixture
    def rule_runner() -> RuleRunner:
        return RuleRunner(rules=filedeps.rules(), target_types=[MockTarget])

    def test_one_target_one_source(rule_runner: RuleRunner) -> None:
        rule_runner.write_files(
            {
                "project/example.ext": "",
                "project/BUILD": "mock_tgt(source='example.ext')",
            }
        )
        result = rule_runner.run_goal_rule(Filedeps, args=["project/example.ext"])
        assert result.stdout.splitlines() == ["project/BUILD", "project/example.ext"]
Unlike when testing normal @rules, you do not need to define a QueryRule when using rule_runner.run_goal_rule(). This is already set up for you. However, you do need to make sure that your @goal_rule and all the rules it depends on are registered with the RuleRunner instance.
Approach 4: run_pants() (integration tests for Pants)
pants_integration_test.py provides functions that allow you to run a full Pants process as it would run on the command line. It's useful for acceptance testing and for testing things that are too difficult to test with Approach #3.
You will typically use three functions:
- setup_tmpdir(), a context manager that sets up temporary files in the build root to simulate a real project.
  - It takes a single parameter files: Mapping[str, str], a dictionary of file paths to file content. All file paths will be prefixed by the temporary directory, and file content can include {tmpdir}, which will get substituted with the actual temporary directory.
  - It yields the temporary directory, relative to the test's current working directory.
- run_pants(), which runs Pants using the list[str] of arguments you pass, such as ["help"].
  - It returns a PantsResult object, which has the fields exit_code: int, stdout: str, and stderr: str.
  - It accepts several other optional arguments, including config, extra_env, and any keyword argument accepted by subprocess.Popen().
- PantsResult.assert_success() or PantsResult.assert_failure(), which checks the exit code and prints a helpful error message if it is unexpected.
For example:
    from pants.testutil.pants_integration_test import run_pants, setup_tmpdir

    def test_build_ignore_dependency() -> None:
        sources = {
            "dir1/BUILD": "files(sources=[])",
            "dir2/BUILD": "files(sources=[], dependencies=['{tmpdir}/dir1'])",
        }
        with setup_tmpdir(sources) as tmpdir:
            ignore_result = run_pants(
                [f"--build-ignore={tmpdir}/dir1", "dependencies", f"{tmpdir}/dir2"]
            )
            no_ignore_result = run_pants(["dependencies", f"{tmpdir}/dir2"])
        ignore_result.assert_failure()
        assert f"{tmpdir}/dir1" not in ignore_result.stderr
        no_ignore_result.assert_success()
        assert f"{tmpdir}/dir1" in no_ignore_result.stdout
run_pants() is hermetic by default, meaning that it will not read your pants.toml. As a result, you often need to include the option --backend-packages in the arguments to run_pants(). You can alternatively set the argument hermetic=False, although we discourage this.
For example:
    from pants.testutil.pants_integration_test import run_pants, setup_tmpdir

    def test_getting_list_of_files_from_a_target() -> None:
        sources = {
            "dir/BUILD": "files(sources=['subdir/*.txt'])",
            "dir/subdir/file1.txt": "",
            "dir/subdir/file2.txt": "",
        }
        with setup_tmpdir(sources) as tmpdir:
            result = run_pants(
                [
                    "--backend-packages=['pants.backend.python']",
                    "filedeps",
                    f"{tmpdir}/dir:",
                ],
            )
        result.assert_success()
        assert all(
            filepath in result.stdout
            for filepath in (
                f"{tmpdir}/dir/subdir/file1.txt",
                f"{tmpdir}/dir/subdir/file2.txt",
            )
        )
To read any files that were created, use get_buildroot() as the first part of the path to ensure that the correct directory is read.
    from pathlib import Path

    from pants.base.build_environment import get_buildroot
    from pants.testutil.pants_integration_test import run_pants, setup_tmpdir

    def test_junit_report() -> None:
        with setup_tmpdir(...) as tmpdir:
            run_pants(["--coverage-py-reports=['json']", "test", ...]).assert_success()
        coverage_report = Path(get_buildroot(), "dist", "coverage", "python", "report.json")
        assert coverage_report.read_text() == "foo"