PyTest
pytest is a popular testing framework for Python that makes it easy to write simple as well as scalable test cases. It can be used to write various types of software tests including unit tests, integration tests, end-to-end tests, and functional tests.
Installation
pytest can be installed using pip as follows:
pip install -U pytest
You can alternatively install it as a global tool using uv as follows:
uv tool install pytest
Running Tests
You can run your tests by simply running the following command in your project directory:
pytest
# For more detailed output, use the verbose option
pytest -v
By default, pytest will automatically:
- Discover test files.
- Run all test functions.
- Report results neatly in the terminal.
NOTE
The part of pytest execution where pytest finds which tests to run is called test discovery.
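To preview which tests pytest would collect without actually running them, you can use the --collect-only option (covered again in the options table below):
pytest --collect-only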
Example:
# content of test_sample.py
def func(x):
    return x + 1

# This test will fail
def test_answer():
    assert func(3) == 5
Naming Conventions
| Entity | Convention | Example |
|---|---|---|
| Test file | test_*.py or *_test.py | test_math_utils.py |
| Test method or function | Must start with test_ | def test_addition(): |
| Test class | Must start with Test (and must not define an __init__ method) | class TestDBConnector: |
TIP
Add an empty __init__.py in all test folders to prevent module name collisions. This makes each directory a package, giving tests unique import paths.
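For example, a layout like the following (directory names are purely illustrative) lets two files both named test_utils.py coexist without import collisions:
tests/
    unit/
        __init__.py
        test_utils.py
    integration/
        __init__.py
        test_utils.py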
Possible Outcomes
When you run tests, these are the possible outcomes:
| Symbol | Verbose Name | Description |
|---|---|---|
| . | PASSED | The test ran successfully without any issues. |
| F | FAILED | The test failed (assertion failure or an unexpected exception inside the test). |
| E | ERROR | An exception occurred outside the test itself, typically during fixture setup or teardown. |
| s | SKIPPED | The test was skipped, usually via the @pytest.mark.skip or @pytest.mark.skipif decorator. |
| x | XFAIL (Expected Failure) | The test was expected to fail and did fail. |
| X | XPASS (Unexpected Success) | The test was expected to fail but passed instead. With xfail(strict=True), this is treated as a failure. |
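In the default (non-verbose) output, these symbols are printed after each file name as a progress string, so a run might end with something roughly like:
test_example.py .FsxX                                    [100%]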
Targeting Specific Tests
As your project grows, running the entire test suite every time can be slow and unnecessary. pytest gives you the flexibility to target specific tests.
- If you want to run only the tests inside a particular directory (e.g., tests/api/), use:
pytest tests/api/
- If you want to run only the tests in a single test file/module, use:
pytest tests/api/test_auth.py
- To target a single test function or a class within a file, specify the function or class name after ::
# Targeting a function
pytest tests/api/test_auth.py::test_login_success
# Targeting a class
pytest tests/api/test_auth.py::TestClass
- To target a single test method within a class, specify the class name and method name after ::
pytest tests/api/test_auth.py::TestClass::test_bad_id
- You can even mix and match directories, files, classes, methods and test functions in a single command:
pytest tests/api/ tests/db/test_db.py tests/api/test_auth.py::test_login_success
- You can also target specific tests by using the -k expression filter or by selecting tests via markers. Both approaches are explained below.
Command-Line Options
pytest comes with a rich set of command-line options to customize test runs. The most commonly used ones are:
| Option | Description |
|---|---|
| --collect-only | Only collect tests, don't execute them. |
| -v, --verbose | Enables verbose mode, showing each test name and detailed output. |
| -q, --quiet | Runs tests in quiet mode with minimal output (decreases verbosity). |
| -x | Stops execution after the first failure. |
| --maxfail=N | Stops after N test failures. |
| --lf, --last-failed | Runs only tests that failed in the previous run (or all if none failed). |
| -ff, --failed-first | Runs previously failed tests first, then the rest. |
| -k EXPRESSION | Runs tests matching a given expression. |
| -m MARKEXPR | Runs tests matching a specific marker expression. |
| --tb=STYLE | Controls traceback style (auto, short, line, no, etc.). |
| -l, --showlocals | Shows local variables in tracebacks (disabled by default). |
| -s | Shows print and log output in real time (disables output capture). |
| --durations=N | Displays the N slowest tests after execution (N=0 for all). |
| --setup-show | Displays the setup and teardown of each fixture and test (useful for debugging fixtures). |
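These options can be combined freely on one command line. A few typical combinations:
# Stop at the first failure, show local variables, use short tracebacks
pytest -x -l --tb=short
# Re-run only the tests that failed last time, verbosely
pytest --lf -v
# Show the 10 slowest tests and print output live
pytest --durations=10 -s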
Examples
In pytest, you can use assert <expression> with any Python expression. If the expression evaluates to False, the test fails.
assert something
assert a == b
assert a <= b
NOTE
In pytest output, > marks the line where the assertion failed, while E labels the error explanation lines.
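For the failing test_answer example above, the relevant part of the report looks roughly like this (abbreviated):
    def test_answer():
>       assert func(3) == 5
E       assert 4 == 5
E        +  where 4 = func(3)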
Filter Tests
You can filter which tests to run using the -k option of the pytest command. The option accepts a keyword expression, which is used to match tests.
# This matches all tests that have "_raises" but not "delete" in their name
pytest -k "_raises and not delete"
Mark Tests
pytest lets you put markers on tests, which means attaching labels to them so pytest can treat them differently. A test can have more than one marker, and a marker can be on multiple tests. Let's say you want to mark certain tests as "slow" so that you can deselect them later; you can do so using the @pytest.mark decorator as shown below:
@pytest.mark.slow
def test_big_computation():
    ...
These custom markers must be declared in pytest.ini:
# pytest.ini
[pytest]
addopts = --strict-markers
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
When the --strict-markers option is in effect, any unknown marks applied with the @pytest.mark.name_of_the_mark decorator will trigger an error.
pytest also includes a few helpful builtin markers such as skip, skipif, xfail etc.
Skip Tests
- The simplest way to skip a test function is to mark it with the skip decorator, which may be passed an optional reason:
@pytest.mark.skip(reason="no way of currently testing this")
def test_the_unknown(): ...
- If you want to skip a test imperatively during execution, you can also use the pytest.skip(reason) function:
def test_function():
    if not valid_config():
        pytest.skip("unsupported configuration")
- If you wish to skip something conditionally, you can use skipif instead:
@pytest.mark.skipif(sys.version_info < (3, 13), reason="requires python3.13 or higher")
def test_function(): ...
Expected Failures
- To mark a test function as expected to fail, use the xfail marker:
@pytest.mark.xfail
def test_function(): ...
- Similar to skipping tests, you can also mark a test as XFAIL imperatively during execution, using the pytest.xfail() function:
WARNING
Calling pytest.xfail() immediately marks the test as XFAIL and stops executing the rest of the test, because internally it works by raising a special exception. This is different from the xfail marker, which still runs the full test body.
def test_function():
    if not valid_config():
        pytest.xfail("failing configuration (but should work)")
- You can also make the marker conditional by passing a condition to xfail, along with a reason explaining why it’s expected to fail:
@pytest.mark.xfail(sys.version_info < (3, 13), reason="not supported for Python versions < 3.13")
def test_function():
    ...
Expected Exceptions
- To verify that a certain error is raised when the code runs, you can use pytest.raises() as a context manager:
# This test fails if no exception is raised or if the raised exception is not ZeroDivisionError
def test_zero_division():
    with pytest.raises(ZeroDivisionError):
        1 / 0  # some code that ends up dividing by zero
- You can also get access to the actual exception message using something like:
def test_recursion_depth():
    with pytest.raises(RuntimeError) as excinfo:
        # Some code that will raise an exception
        def f():
            f()
        f()
    assert "maximum recursion" in str(excinfo.value)
- Alternatively, you can pass a match keyword parameter to match the exception message against a regular expression:
def test_recursion_depth():
    with pytest.raises(RuntimeError, match=r".* recursion"):
        # Some code that will raise a matching exception
        raise RuntimeError("maximum recursion depth exceeded")
Parametrized Testing
Parametrization lets you run the same test with multiple sets of inputs, avoiding repetition and making your tests easier to maintain.
NOTE
Even though parametrize is accessed via pytest.mark, it is not a marker. Instead, it is a helper function that generates multiple test cases and behaves more like a decorator factory than a simple marker label.
- The following example generates three separate test cases, one for each input:
@pytest.mark.parametrize("value", [0, 1, 2])
def test_is_non_negative(value):
    assert value >= 0
- You can also parametrize multiple arguments as shown below:
@pytest.mark.parametrize("x, y, result", [
    (1, 2, 3),
    (2, 3, 5),
    (10, -1, 9),
])
def test_add(x, y, result):
    assert x + y == result
- You’ve already seen how to target individual tests. Similarly, you can also target a parametrized test case using its auto-generated identifier (or nodeid in pytest terminology):
# This targets the first test case in the example code above
pytest test_math.py::test_add[1-2-3]
- If you don't want to use these auto-generated IDs and wish to provide your own ID values, you can do so using the ids parameter:
@pytest.mark.parametrize(
"x, y, result",
[
(1, 2, 3),
(2, 3, 5),
(10, -1, 9),
],
ids=["test_one", "test_two", "test_three"]
)
def test_add(x, y, result):
    assert x + y == result

# Now you can target tests like this: pytest test_math.py::test_add[test_one]
- Alternatively, you can do the same using the pytest.param function:
@pytest.mark.parametrize(
"x, y, result",
[
pytest.param(1, 2, 3, id="test_one"),
pytest.param(2, 3, 5, id="test_two"),
pytest.param(10, -1, 9, id="test_three"),
]
)
def test_add(x, y, result):
    assert x + y == result
Fixtures
A fixture in pytest is a function decorated with @pytest.fixture. It can do one or more of the following:
- provide data to tests,
- set up environment state or resources before a test runs,
- clean up after the test completes.
To use a fixture, simply include its name as a parameter in your test function (pytest matches the parameter name to the fixture and injects its return value):
import pytest
@pytest.fixture
def sample_data():
return {"id": 1, "username": "test_user"}
def test_data_validation(sample_data):
# The fixture return value is injected into 'sample_data'
assert sample_data["id"] == 1To handle teardown (cleanup code), use the yield keyword instead of return. Code before the yield is the setup; code after the yield is the teardown.
@pytest.fixture
def db_connection():
print("\n--- Connecting to DB ---") # Setup
conn = "DatabaseConnection"
yield conn
print("\n--- Closing DB ---") # Teardown
def test_query(db_connection):
assert db_connection == "DatabaseConnection"Sometimes a fixture is needed only for its side effects (e.g., initialize database, mock environment), not for its return value. In such cases, you can use @pytest.mark.usefixtures() to activate the fixture without adding it as a function argument.
@pytest.fixture
def setup_db():
print("\nSetting up DB")
yield
print("\nTearing down DB")
@pytest.mark.usefixtures("setup_db")
def test_something():
    assert True
usefixtures also works at the class level and affects all tests inside the class:
@pytest.mark.usefixtures("setup_db")
class TestDBOperations:
    def test_insert(self):
        ...

    def test_delete(self):
        ...
Sharing Fixtures (conftest.py)
If you define fixtures in a test file, they are only available to tests in that file. To share fixtures across multiple test files, use a conftest.py file.
NOTE
You can and often will have multiple conftest.py files in a large pytest project. Each conftest.py acts as a local plugin for the directory it lives in, providing fixtures, hook implementations, and other test-specific configuration to that directory and its subdirectories.
- No Import Required: The fixtures and hooks defined in a conftest.py file are automatically available to all tests in that directory and any of its subdirectories. They don't need to be imported.
- Hierarchical: When a test runs, pytest looks for fixtures in the test file itself, then in the local conftest.py, and then in conftest.py files moving up the directory structure until it reaches the project root (see the sketch below).
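As a minimal sketch (the fixture and file names are made up for illustration), a fixture defined in a top-level conftest.py is visible to every test below it without any import:
# tests/conftest.py
import pytest

@pytest.fixture
def api_base_url():
    return "https://example.test"

# tests/api/test_auth.py
def test_base_url_available(api_base_url):
    assert api_base_url.startswith("https://")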
Scope of Fixtures
By default, a fixture is created and destroyed for every test function that uses it. However, for expensive operations (like spinning up a Docker container or connecting to a database), you can change the scope to cache the fixture instance.
Available Scopes:
- function (default): Run once per test function.
- class: Run once per test class.
- module: Run once per .py file.
- package: Run once per package/directory (since pytest 7+, __init__.py is not required).
- session: Run once for the entire test run.
# This will only run once for the entire test suite
@pytest.fixture(scope="session")
def expensive_resource():
    resource = start_server()
    yield resource
    resource.shutdown()
Extending Fixtures
Fixtures can use other fixtures. This allows you to build modular chains of dependencies.
Important Rule: A fixture can only request other fixtures that have the same or wider scope.
- ✅ A function-scoped fixture can request a session-scoped fixture.
- ❌ A session-scoped fixture cannot request a function-scoped fixture.
@pytest.fixture(scope="session")
def db_engine():
    return create_engine()

@pytest.fixture(scope="function")
def db_session(db_engine):
    # Depending on the wider-scoped 'db_engine'
    session = db_engine.connect()
    yield session
    session.close()
autouse
If you want a fixture to run automatically without explicitly requesting it in every test (or decorating every test), set autouse=True.
This is best used for global setup/teardown logic, such as configuring logging or clearing temporary files.
@pytest.fixture(autouse=True)
def clean_logs():
print("Cleaning logs before test...")
yield
print("Archiving logs after test...")
def test_something():
    # clean_logs runs automatically here
    assert True
Renaming Fixtures
If a fixture name is too long or conflicts with a variable name, you can assign it a specific name using the name argument.
@pytest.fixture(name="user")
def create_complex_user_object_helper():
return "UserObject"
def test_user_login(user):
    # We access the fixture using the alias "user"
    assert user == "UserObject"
Parametrizing Fixtures
Fixtures can be parametrized using params=[...]. Inside the fixture, each element is accessed through pytest's built-in request fixture as request.param.
@pytest.fixture(params=["postgres", "mysql", "sqlite"])
def db_driver(request):
    return request.param

def test_database_connection(db_driver):
    # This test runs 3 times, once for each param in the fixture
    assert db_driver in ["postgres", "mysql", "sqlite"]
While pytest.mark.parametrize feeds different inputs to a single test, fixture parametrization runs every test that uses the parametrized fixture once per parameter value.
Builtin Fixtures
pytest provides several useful built-in fixtures. Below are some of the most commonly used ones, explained at a high level. For a complete list of all pytest built-in fixtures, see the official reference. If you want to explore any specific fixture in more depth, refer to its dedicated documentation page.
request
The request fixture is a special fixture that provides information about the requesting test function. It also exposes a param attribute when the fixture is parametrized, as shown in the example above.
Fixtures commonly use request to adjust their behavior dynamically based on the calling test.
# conftest.py
import pytest
@pytest.fixture
def get_info(request):
"""A fixture that uses 'request' to get the test's name."""
return f"Test called: {request.node.name}"
# test_example.py
def test_using_request(get_info):
    # 'get_info' receives the name of this test function
    assert "test_using_request" in get_info
    print(get_info)
tmp_path and tmp_path_factory
NOTE
tmp_path is a function-scoped fixture that internally uses tmp_path_factory. If you need a temporary directory for a broader scope, use tmp_path_factory instead.
The tmp_path fixture provides a unique temporary directory for each test. It’s useful for creating files or folders without affecting the real filesystem. The directory is automatically cleaned up after the test run.
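A minimal example (the file name is arbitrary):
def test_write_report(tmp_path):
    report = tmp_path / "report.txt"  # tmp_path is a pathlib.Path
    report.write_text("all good")
    assert report.read_text() == "all good"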
tmp_path_factory works similarly but is intended for creating temporary directories at broader scopes (e.g., module or session level), allowing multiple tests to share the same temporary structure when needed.
# conftest.py
@pytest.fixture(scope="session")
def shared_file(tmp_path_factory):
    path = tmp_path_factory.mktemp("data") / "sample.txt"
    path.write_text("hello")
    return path

# test_example.py
def test_read(shared_file):
    assert shared_file.read_text() == "hello"
NOTE
tmp_path and tmp_path_factory replace the older tmpdir and tmpdir_factory fixtures, which use the legacy py.path API. The newer fixtures use the modern pathlib.Path API and are recommended for all new tests.
pytestconfig
pytestconfig is a session-scoped fixture that provides access to pytest’s global configuration object. It allows tests and fixtures to read command-line options, configuration values from pytest.ini, and other runtime settings. This is useful when test behavior needs to change based on how pytest was invoked.
def test_read_config(pytestconfig):
    # Get a built-in or custom command-line option
    verbose = pytestconfig.getoption("verbose")
    # Read a value from pytest.ini (if defined)
    cache_dir = pytestconfig.getini("cache_dir")
    assert isinstance(verbose, int)
    print(f"Verbose level: {verbose}, Cache dir: {cache_dir}")
cache
The cache fixture provides a persistent key/value store to save and retrieve data across test runs. It exposes simple get and set methods for writing and reading values, and internally stores them using JSON serialization (json.dumps / json.loads).
All cached data lives under the .pytest_cache/ directory by default.
def test_cache_example(cache):
    # store a value
    cache.set("myplugin/last_result", {"count": 42})
    # read it back (the second argument is the default returned if the key is missing)
    value = cache.get("myplugin/last_result", None)
    assert value["count"] == 42
Cache keys should follow a namespaced convention to avoid collisions. A common pattern is "plugin_or_app/key", such as myplugin/heavy_dataset_v1.
You can inspect cached values using pytest’s command-line options:
pytest --cache-show # show everything
pytest --cache-show myprefix/* # show values matching glob
pytest --cache-clear # clear the cache entirely
The underlying cache storage object is accessed via pytestconfig.cache (or request.config.cache). This is how fixtures, especially session-scoped ones, interact with the persistent cache, because the cache fixture behaves like a function-scoped fixture while the cache object it exposes persists across the entire test session.
NOTE
The cache fixture behaves like a function-scoped fixture, but the underlying cache store (config.cache) is persistent across the entire test session and survives between pytest runs.
@pytest.fixture(scope="session")
def heavy_data(pytestconfig):
    cache = pytestconfig.cache  # same persistent store
    ...
capsys
By default, pytest runs with output capturing enabled (to keep test results clean), meaning:
- print() output does not appear on the terminal.
- pytest captures both stdout and stderr internally.
- failed tests show the captured output in the traceback.
The capsys fixture lets you read captured output during a test:
def test_output_capture(capsys):
print("hello world")
out, err = capsys.readouterr()
assert out.strip() == "hello world"
assert err == ""You can temporarily disable capture for a block of code::
with capsys.disabled():
print("This will be shown immediately.")WARNING
When pytest is run with the -s option, all output capturing is disabled. As a result, capsys.readouterr() will always return empty strings for both stdout and stderr, and capsys.disabled() has no additional effect since capturing is already turned off.
monkeypatch
The monkeypatch fixture lets you temporarily modify (or "patch") the behavior of modules, classes, functions, environment variables, or the working directory for the duration of a single test. It is commonly used to replace functions, mock values, or adjust configuration without permanently altering your code or environment.
One of the most critical features of monkeypatch is its automatic teardown: Any changes made with it are automatically reverted after the test completes, regardless of whether the test passed or failed.
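For example, the sketch below (patching os.getcwd purely for illustration) relies on that automatic undo; the second test sees the original behavior again:
import os

def test_patched_cwd(monkeypatch):
    monkeypatch.setattr(os, "getcwd", lambda: "/fake/dir")
    assert os.getcwd() == "/fake/dir"

def test_real_cwd():
    # Runs after the test above: the patch has already been reverted
    assert os.getcwd() != "/fake/dir"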
The monkeypatch fixture provides the following methods to modify objects, dictionaries, or environment variables:
setattr and delattr
setattr replaces an attribute on a target object. delattr removes an attribute from an object. If raising=True (default) and the attribute does not exist, an AttributeError is raised.
Syntax:
- setattr(obj, name, value, raising=True)
- delattr(obj, name, raising=True)
def test_setattr(monkeypatch):
    import myapp.utils
    monkeypatch.setattr(myapp.utils, "get_user_id", lambda: 42)
    assert myapp.utils.get_user_id() == 42
    monkeypatch.delattr(myapp.utils, "get_user_id")
    assert not hasattr(myapp.utils, "get_user_id")
setitem and delitem
setitem updates a mapping by assigning a value to a key. delitem removes a key from a mapping. If raising=True (default) and the key does not exist, delitem raises a KeyError.
Syntax:
- setitem(mapping, name, value)
- delitem(obj, name, raising=True)
def test_setitem_delitem(monkeypatch):
    data = {}
    monkeypatch.setitem(data, "k", "v")
    assert data["k"] == "v"
    monkeypatch.delitem(data, "k")
    assert "k" not in data
setenv and delenv
setenv temporarily sets an environment variable. If prepend is set to a separator character and the variable already exists, the new value is placed before the existing value, joined by that character (which makes it convenient for PATH-like variables).
delenv deletes an environment variable. If raising=True (default) and the variable does not exist, a KeyError is raised.
Syntax:
- setenv(name, value, prepend=None)
- delenv(name, raising=True)
def test_setenv_delenv(monkeypatch):
    import os
    monkeypatch.setenv("MODE", "test")
    assert os.getenv("MODE") == "test"
    monkeypatch.delenv("MODE")
    assert os.getenv("MODE") is None
Others
- syspath_prepend(path): Adds a path to the beginning of sys.path, ensuring imports are resolved from that location first.
- chdir(path): Temporarily changes the current working directory for the duration of the test.
- context(): Context manager that returns a new MonkeyPatch object which undoes any patching done inside the with block upon exit.
import functools
def test_partial(monkeypatch):
    with monkeypatch.context() as m:
        m.setattr(functools, "partial", 3)
Plugins
pytest is designed to be highly extensible. While the core framework is powerful, its plugin system allows you to add new features or modify existing behavior to suit your specific testing needs.
A plugin is essentially Python code that implements one or more pytest "hooks" (functions named pytest_*) or provides additional fixtures, markers, or behavior. Plugins allow you to customize pytest without modifying its source code. They can:
- Add new fixtures (many built-in fixtures such as tmp_path and monkeypatch come from plugins).
- Add new command-line options and modify test execution behavior.
- Modify test collection or filtering.
- Add custom reports or logging.
- Integrate with external tools (e.g., coverage, asyncio, benchmarking, Docker, Flask)
pytest maintains a rich ecosystem of community plugins. The complete list can be browsed at pytest plugin index.
Writing your own plugin
The simplest way to create your own plugin is to define a pytest "hook" function in your project's conftest.py file. Such plugins are known as local plugins, and they are ideal for project-specific customization.
# conftest.py
import pytest
# A simple hook implementation (plugin)
def pytest_addoption(parser):
parser.addoption("--env", action="store", default="dev", help="env to run tests against")
@pytest.fixture
def env_config(request):
    return request.config.getoption("--env")
This plugin adds a custom CLI option (--env) and exposes a fixture (env_config) that any test can consume.
Creating an installable plugin
If you want to reuse a plugin across multiple projects or publish it for others, you can package it as a standalone installable plugin. A standalone plugin is typically a single Python module or package, such as:
myplugin/
    __init__.py
    hooks.py
pyproject.toml
The key step is to register your plugin as an entry point inside pyproject.toml:
[project]
name = "pytest-myplugin"
version = "0.1.0"
[project.entry-points."pytest11"]
myplugin = "myplugin.hooks"The pytest11 entry point group tells pytest to automatically load your plugin when installed.
Once packaged, users can enable it simply by installing it:
pip install pytest-myplugin
Testing your plugin
pytest provides the pytester (or testdir) fixture to help you test plugins in isolation. It lets you create temporary test files, run pytest with your plugin active, and inspect results.
# conftest.py (in your plugin's test folder)
pytest_plugins = ["pytester"]
# test_myplugin.py
def test_custom_option(pytester):
    # Create a test file that uses the plugin
    pytester.makepyfile("""
        def test_example(env_config):
            assert env_config in ("dev", "prod")
    """)
    # Run pytest with a custom CLI option
    result = pytester.runpytest("--env=prod")
    result.assert_outcomes(passed=1)
Distributing your plugin
To publish your plugin:
- Build your package:
python -m build
- Upload it to PyPI:
twine upload dist/*
- Include documentation:
- Installation instructions.
- Usage examples.
- List of provided hooks, fixtures, or options.
- Add the appropriate PyPI classifier so your plugin appears in the official pytest plugin index:
classifiers = ["Framework :: pytest"]- Users can then install your plugin with
pip.
Configuration
pytest allows you to customize its behavior through configuration files. These settings control test discovery, default command-line options, custom markers, plugins, and more. Configuration is typically defined in a pytest.ini file placed at the project root. You may also put pytest settings inside tox.ini if you are using tox (a tool that automates testing across multiple isolated virtual environments).
A sample pytest configuration file looks like this:
# pytest.ini
[pytest]
minversion = 6.0
addopts = -rsxX -l
testpaths = src/api_tests src/unit_tests
norecursedirs = .git venv build dist
python_files = test_*.py
python_classes = Test*
python_functions = test_*
xfail_strict = true
markers =
    slow: marks tests as slow (deselect with -m "not slow")
NOTE
A pytest project can contain many configuration files, but only one pytest.ini (or equivalent config file) is used per test run. Pytest stops searching after it finds the first configuration file (pytest.ini, tox.ini, or setup.cfg) relative to the invocation directory.
Here are the common pytest configuration settings:
| Setting | Purpose (What it does) |
|---|---|
| minversion | Enforces a minimum required version of pytest before running tests, preventing issues caused by older pytest features or behavior. |
| addopts | Defines default command-line options automatically applied to every pytest run, saving you from repeatedly typing common flags. |
| testpaths | Tells pytest which directories to scan for tests. Useful if your tests aren’t in the default locations or if you want to speed up collection by limiting search paths. |
| norecursedirs | Tells pytest which directories to ignore entirely during test discovery (e.g., build/, env/, .venv/, large data folders). |
| python_files | Defines the filename patterns pytest uses to identify test modules. Defaults include test_*.py or *_test.py. |
| python_classes | Defines the naming pattern for test classes. By default, pytest treats classes starting with Test as test containers. |
| python_functions | Defines the naming pattern pytest uses for test functions. By default, it detects functions beginning with test_. |
| markers | Registers custom markers (e.g., @pytest.mark.slow) to avoid "unknown marker" warnings and to document marker usage. |
| xfail_strict | Makes xfail strict. If enabled, tests marked xfail that unexpectedly pass will be treated as failures (xpass becomes an error). |
References
- Official PyTest Documentation
- "Python Testing with pytest" book by Brian Okken.
