Pytest assert error

How to write and report assertions in tests

Asserting with the assert statement

pytest allows you to use the standard Python assert for verifying
expectations and values in Python tests. For example, you can write the
following:

# content of test_assert1.py
def f():
    return 3


def test_function():
    assert f() == 4

to assert that your function returns a certain value. If this assertion fails
you will see the return value of the function call:

$ pytest test_assert1.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-7.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 1 item

test_assert1.py F                                                    [100%]

================================= FAILURES =================================
______________________________ test_function _______________________________

    def test_function():
>       assert f() == 4
E       assert 3 == 4
E        +  where 3 = f()

test_assert1.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_assert1.py::test_function - assert 3 == 4
============================ 1 failed in 0.12s =============================

pytest has support for showing the values of the most common subexpressions
including calls, attributes, comparisons, and binary and unary
operators. (See :ref:`tbreportdemo`). This allows you to use the
idiomatic python constructs without boilerplate code while not losing
introspection information.

However, if you specify a message with the assertion like this:

assert a % 2 == 0, "value was odd, should be even"

then no assertion introspection takes place at all and the message
will simply be shown in the traceback.

See :ref:`assert-details` for more information on assertion introspection.
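
For illustration, here is a minimal sketch of that trade-off (our own example, not from the docs): with the message present, a failure reports only the text, not the introspected subexpression values.

# content of test_assert_message.py
def test_even():
    a = 3
    # Because a message is supplied, pytest shows only this text on
    # failure instead of introspecting the subexpression values.
    assert a % 2 == 0, f"value was odd ({a}), should be even"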

Assertions about expected exceptions

In order to write assertions about raised exceptions, you can use
:func:`pytest.raises` as a context manager like this:

import pytest


def test_zero_division():
    with pytest.raises(ZeroDivisionError):
        1 / 0

and if you need to have access to the actual exception info you may use:

def test_recursion_depth():
    with pytest.raises(RuntimeError) as excinfo:

        def f():
            f()

        f()
    assert "maximum recursion" in str(excinfo.value)

excinfo is an :class:`~pytest.ExceptionInfo` instance, which is a wrapper around
the actual exception raised. The main attributes of interest are
.type, .value and .traceback.
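
For illustration, a small sketch (our addition, not from the docs) showing how those attributes can be checked after the block:

import pytest


def test_exception_info_attributes():
    with pytest.raises(ValueError) as excinfo:
        int("not a number")
    # .type is the exception class, .value the raised instance, and
    # .traceback a traversable wrapper around the traceback entries.
    assert excinfo.type is ValueError
    assert "invalid literal" in str(excinfo.value)
    assert len(excinfo.traceback) >= 1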

You can pass a match keyword parameter to the context-manager to test
that a regular expression matches on the string representation of an exception
(similar to the TestCase.assertRaisesRegex method from unittest):

import pytest


def myfunc():
    raise ValueError("Exception 123 raised")


def test_match():
    with pytest.raises(ValueError, match=r".* 123 .*"):
        myfunc()

The regexp passed to the match parameter is matched with the re.search
function, so in the above example match='123' would have worked as
well.

There’s an alternate form of the :func:`pytest.raises` function where you pass
a function that will be executed with the given *args and **kwargs and
assert that the given exception is raised:

pytest.raises(ExpectedException, func, *args, **kwargs)

The reporter will provide you with helpful output in case of failures such as no
exception
or wrong exception.
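
As a sketch of that call form (our own example; operator.truediv stands in for any callable), the call returns the same ExceptionInfo wrapper described above:

import operator

import pytest


def test_zero_division_call_form():
    # pytest.raises calls operator.truediv(1, 0) and asserts that
    # ZeroDivisionError is raised, returning an ExceptionInfo.
    excinfo = pytest.raises(ZeroDivisionError, operator.truediv, 1, 0)
    assert "division by zero" in str(excinfo.value)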

Note that it is also possible to specify a "raises" argument to
pytest.mark.xfail, which checks that the test is failing in a more
specific way than just having any exception raised:

@pytest.mark.xfail(raises=IndexError)
def test_f():
    f()

Using :func:`pytest.raises` is likely to be better for cases where you are
testing exceptions your own code is deliberately raising, whereas using
@pytest.mark.xfail with a check function is probably better for something
like documenting unfixed bugs (where the test describes what "should" happen)
or bugs in dependencies.

Assertions about expected warnings

You can check that code raises a particular warning using
:ref:`pytest.warns <warns>`.
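
A minimal sketch of what that looks like (our own example; pytest.warns also accepts a match argument, analogous to pytest.raises):

import warnings

import pytest


def test_deprecation_warning():
    with pytest.warns(DeprecationWarning, match="use new_api"):
        warnings.warn("use new_api instead", DeprecationWarning)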

Making use of context-sensitive comparisons

pytest has rich support for providing context-sensitive information
when it encounters comparisons. For example:

# content of test_assert2.py
def test_set_comparison():
    set1 = set("1308")
    set2 = set("8035")
    assert set1 == set2

if you run this module:

$ pytest test_assert2.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-7.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 1 item

test_assert2.py F                                                    [100%]

================================= FAILURES =================================
___________________________ test_set_comparison ____________________________

    def test_set_comparison():
        set1 = set("1308")
        set2 = set("8035")
>       assert set1 == set2
E       AssertionError: assert {'0', '1', '3', '8'} == {'0', '3', '5', '8'}
E         Extra items in the left set:
E         '1'
E         Extra items in the right set:
E         '5'
E         Use -v to get more diff

test_assert2.py:4: AssertionError
========================= short test summary info ==========================
FAILED test_assert2.py::test_set_comparison - AssertionError: assert {'0'...
============================ 1 failed in 0.12s =============================

Special comparisons are done for a number of cases:

  • comparing long strings: a context diff is shown
  • comparing long sequences: first failing indices
  • comparing dicts: different entries

See the :ref:`reporting demo <tbreportdemo>` for many more examples.
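
As a quick illustration of the dict case (our own example; the exact wording of the report varies by pytest version), a failure like the following shows only the differing entry and omits the identical items:

def test_dict_comparison():
    result = {"a": 1, "b": 2, "c": 3}
    expected = {"a": 1, "b": 20, "c": 3}
    # The failure report highlights {'b': 2} != {'b': 20} rather than
    # dumping both dictionaries wholesale.
    assert result == expected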

Defining your own explanation for failed assertions

It is possible to add your own detailed explanations by implementing
the pytest_assertrepr_compare hook.

.. autofunction:: _pytest.hookspec.pytest_assertrepr_compare
   :noindex:

As an example consider adding the following hook in a :ref:`conftest.py <conftest.py>`
file which provides an alternative explanation for Foo objects:

# content of conftest.py
from test_foocompare import Foo


def pytest_assertrepr_compare(op, left, right):
    if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
        return [
            "Comparing Foo instances:",
            f"   vals: {left.val} != {right.val}",
        ]

now, given this test module:

# content of test_foocompare.py
class Foo:
    def __init__(self, val):
        self.val = val

    def __eq__(self, other):
        return self.val == other.val


def test_compare():
    f1 = Foo(1)
    f2 = Foo(2)
    assert f1 == f2

you can run the test module and get the custom output defined in
the conftest file:

$ pytest -q test_foocompare.py
F                                                                    [100%]
================================= FAILURES =================================
_______________________________ test_compare _______________________________

    def test_compare():
        f1 = Foo(1)
        f2 = Foo(2)
>       assert f1 == f2
E       assert Comparing Foo instances:
E            vals: 1 != 2

test_foocompare.py:12: AssertionError
========================= short test summary info ==========================
FAILED test_foocompare.py::test_compare - assert Comparing Foo instances:
1 failed in 0.12s

Assertion introspection details

Reporting details about a failing assertion is achieved by rewriting assert
statements before they are run. Rewritten assert statements put introspection
information into the assertion failure message. pytest only rewrites test
modules directly discovered by its test collection process, so asserts in
supporting modules which are not themselves test modules will not be rewritten.

You can manually enable assertion rewriting for an imported module by calling
:ref:`register_assert_rewrite <assertion-rewriting>`
before you import it (a good place to do that is in your root conftest.py).
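
A sketch of that pattern (the module name "helpers" here is hypothetical):

# content of conftest.py (project root)
import pytest

# Must run before "helpers" is imported anywhere, otherwise the
# already-imported module can no longer be rewritten.
pytest.register_assert_rewrite("helpers")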

For further information, Benjamin Peterson wrote up Behind the scenes of pytest’s new assertion rewriting.

Assertion rewriting caches files on disk

pytest will write back the rewritten modules to disk for caching. You can disable
this behavior (for example to avoid leaving stale .pyc files around in projects that
move files around a lot) by adding this to the top of your conftest.py file:

import sys

sys.dont_write_bytecode = True

Note that you still get the benefits of assertion introspection, the only change is that
the .pyc files won’t be cached on disk.

Additionally, rewriting will silently skip caching if it cannot write new .pyc files,
i.e. in a read-only filesystem or a zipfile.

Disabling assert rewriting

pytest rewrites test modules on import by using an import
hook to write new pyc files. Most of the time this works transparently.
However, if you are working with the import machinery yourself, the import hook may
interfere.

If this is the case you have two options:

  • Disable rewriting for a specific module by adding the string
    PYTEST_DONT_REWRITE to its docstring (see the sketch after this list).
  • Disable rewriting for all modules by using --assert=plain.
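
For the first option, the marker only needs to appear somewhere in the module docstring, as in this sketch of a hypothetical helper module:

"""Test helpers that interact with the import machinery.

PYTEST_DONT_REWRITE
"""


def helper_check(value):
    # asserts in this module will not be rewritten by pytest
    assert value is not None
    return value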

TL;DR

Time is a precious resource so I won’t waste yours. Here’s how you can assert an exception is raised and how to check that in pytest.

Solution: Use pytest.raises

import pytest

def test_raises_exception():
    with pytest.raises(ZeroDivisionError):
        1 / 0

And here’s how you assert no exception is raised.

Solution: Enclose your code in a try/except block, and if the code raises, you can catch it and print a nice message. pytest is smart enough to make the test fail even if you don't catch it, but having a message makes your test cleaner.

def my_division_function(a, b):
    return a / b

def test_code_raises_no_exception():
    """
    Assert your python code raises no exception.    
    """
    try:
        my_division_function(10, 5)
    except ZeroDivisionError as exc:
        assert False, f"'10 / 5' raised an exception {exc}"

And that's it. If you want to know more, please follow along.

Introduction

In this tutorial, you’ll learn how to use pytest to:

  • assert that an exception is raised
  • assert the exception message
  • assert the exception type
  • assert that an exception is not raised

In a nutshell, we’ll see how to use pytest.raises for each of those cases with examples.

Table of Contents

  1. How to Assert That an Exception Is Raised
  2. How to Assert the Exception Message — And Type
  3. How to Assert That NO Exception Is Raised
  4. Conclusion

How to Assert That an Exception Is Raised

In this section, I'm going to show you how you can assert that your code raises an exception. This is a frequent use case and can sometimes be tricky. The wonderful thing is, if you are using pytest you can do that in an idiomatic and cleaner way.

Let’s imagine that we have a function that checks for some keys in a dictionary. If a key is not present, it should raise a KeyError. As you can see, this is very generic and doesn’t tell the users much about the error. We can make it cleaner by raising custom exceptions, with different messages depending on the field.

import pytest


class MissingCoordException(Exception):
    """Exception raised when X or Y is not present in the data."""


class MissingBothCoordException(Exception):
    """Exception raised when both X and Y are not present in the data."""


def sum_x_y(data: dict) -> int:
    return data["x"] + data["y"]

Now, time to test this. How can we do that with pytest?

This code is deliberately wrong: as you can see, we're not raising anything. In fact, we want to see the test fail first, almost like TDD. After seeing the test fail, we can fix our implementation and re-run the test.

def test_sum_x_y_missing_both():
    data = {"irrelevant": 1}
    with pytest.raises(MissingBothCoordException):
        sum_x_y(data)

Then we get the following output:

============================ FAILURES ============================
________________ test_sum_x_y_missing_both _________________

    def test_sum_x_y_missing_both():
        data = {"irrelevant": 1}
        with pytest.raises(MissingBothCoordException):
>           sum_x_y(data)

test_example.py:33: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

data = {'irrelevant': 1}

    def sum_x_y(data: dict) -> int:
>       return data["x"] + data["y"]
E       KeyError: 'x'

test_example.py:27: KeyError
==================== short test summary info =====================
FAILED test_example.py::test_sum_x_y_missing_both - KeyEr...
======================= 1 failed in 0.02s ========================

Ok, this makes sense, now it’s time to fix it. We’ll check if the data dict has both x and y, otherwise we raise a MissingBothCoordException.

def sum_x_y(data: dict) -> int:
    if "x" not in data and "y" not in data:
        raise MissingBothCoordException("Both x and y coord missing.")
    return data["x"] + data["y"]

And when we re-run the test, it passes.

test_example.py .                                          [100%]

======================= 1 passed in 0.01s ========================

Great! And that is pretty much it. This is how you check if an exception is raised with pytest. In the next section, we're going to improve our function and we'll need another test.

How to Assert the Exception Message — And Type

In this section, we’ll improve our sum_x_y function and also the tests. I’ll show you how you can make your test more robust by checking the exception message.

With that in mind, let’s expand the sum_x_y function.

def sum_x_y(data: dict) -> int:
    if "x" not in data and "y" not in data and "extra" not in data:
        raise MissingBothCoordException("Both X and Y coord missing.")
    if "x" not in data:
        raise MissingCoordException("The Y coordinate is not present in the data.")
    if "y" not in data:
        raise MissingCoordException("The Y coordinate is not present in the data.")
    return data["x"] + data["y"]

The new test goes like this:

def test_sum_x_y_has_x_missing_coord():
    data = {"extra": 1, "y": 2}
    with pytest.raises(MissingCoordException):
        sum_x_y(data)

And it passes!

$ poetry run pytest -k test_sum_x_y_has_x_missing_coord
====================== test session starts =======================
collected 2 items / 1 deselected / 1 selected                    

test_example.py .                                          [100%]

================ 1 passed, 1 deselected in 0.01s =================

However, it’s a bit fragile… In case you haven’t noticed it, when "x" is missing, the exception message is: "The Y coordinate is not present in the data.". This is a bug, and one way to detect it is by asserting we return the right message. Thankfully, pytest makes it easier to do.

If we refactor the test to take the message into account:

def test_sum_x_y_has_x_missing_coord():
    data = {"extra": 1, "y": 2}
    with pytest.raises(MissingCoordException) as exc:
        sum_x_y(data)
    assert "The X coordinate is not present in the data." in str(exc.value)
============================ FAILURES ============================
_____________ test_sum_x_y_has_x_missing_coord _____________

def test_sum_x_y_has_x_missing_coord():
        data = {"extra": 1, "y": 2}
        with pytest.raises(MissingCoordException) as exc:
            sum_x_y(data)
>       assert "The X coordinate is not present in the data." in str(exc.value)
E       AssertionError: assert 'The X coordinate is not present in the data.' in 'The Y coordinate is not present in the data.'
E        +  where 'The Y coordinate is not present in the data.' = str(MissingCoordException('The Y coordinate is not present in the data.'))
E        +    where MissingCoordException('The Y coordinate is not present in the data.') = <ExceptionInfo MissingCoordException('The Y coordinate is not present in the data.') tblen=2>.value

test_example.py:32: AssertionError
==================== short test summary info =====================
FAILED test_example.py::test_sum_x_y_has_x_missing_coord
======================= 1 failed in 0.02s ========================

That’s exactly what we want. Let’s fix the code and re-run the test.

def sum_x_y(data: dict) -> int:
    if "x" not in data and "y" not in data and "extra" not in data:
        raise MissingBothCoordException("Both X and Y coord missing.")
    if "x" not in data:
        raise MissingCoordException("The X coordinate is not present in the data.")
    if "y" not in data:
        raise MissingCoordException("The Y coordinate is not present in the data.")
    return data["x"] + data["y"]

And the result…

$ poetry run pytest test_example.py::test_sum_x_y_has_x_missing_coord
====================== test session starts =======================
platform linux -- Python 3.8.5, pytest-6.1.1, py-1.9.0, pluggy-0.13.1
rootdir: /home/miguel/projects/tutorials/pytest-raises
collected 1 item                                                 

test_example.py .                                          [100%]

======================= 1 passed in 0.01s ========================

This is possible because pytest.raises returns an ExceptionInfo object that contains fields such as type, value, traceback and many others. If we wanted to assert the type, we could do something along these lines…

def test_sum_x_y_has_x_missing_coord():
    data = {"extra": 1, "y": 2}
    with pytest.raises(MissingCoordException) as exc:
        sum_x_y(data)
    assert "The X coordinate is not present in the data." in str(exc.value)
    assert exc.type == MissingCoordException

However, we are already asserting that by using pytest.raises, so I think asserting the type like this is a bit redundant. When is this useful then? It's useful if we are asserting a more generic exception in pytest.raises and we want to check the exact exception raised. For instance:

def test_sum_x_y_has_x_missing_coord():
    data = {"extra": 1, "y": 2}
    with pytest.raises(Exception) as exc:
        sum_x_y(data)
    assert "The X coordinate is not present in the data." in str(exc.value)
    assert exc.type == MissingCoordException

One more way to assert the message is by setting the match argument to the pattern you want asserted. The following example was taken from the official pytest docs.

>>> with raises(ValueError, match='must be 0 or None'):
...     raise ValueError("value must be 0 or None")

>>> with raises(ValueError, match=r'must be \d+$'):
...     raise ValueError("value must be 42")

As you can see, we can verify if the expected exception is raised but also if the message matches the regex pattern.
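
One caveat worth hedging against: match is a regular expression, so messages containing metacharacters such as ( or . can fail to match literally. A small sketch using re.escape sidesteps that:

import re

import pytest


def test_match_with_metacharacters():
    msg = "bad value (must be 0 or None)"
    # re.escape turns the literal message into a safe regex pattern.
    with pytest.raises(ValueError, match=re.escape(msg)):
        raise ValueError(msg)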

How to Assert That NO Exception Is Raised

The last section in this tutorial is about yet another common use case: how to assert that no exception is raised. One way we can do that is by using a try / except. If it raises an exception, we catch it and assert False.

def test_sum_x_y_works():
    data = {"extra": 1, "y": 2, "x": 1}

    try:
        sum_x_y(data)
    except Exception as exc:
        assert False, f"'sum_x_y' raised an exception {exc}"

When we run this test, it passes.

$ poetry run pytest test_example.py::test_sum_x_y_works
====================== test session starts =======================
collected 1 item                                                 

test_example.py .                                          [100%]

======================= 1 passed in 0.00s ========================

Now, let's create a deliberate bug so we can see the test failing. We'll change our function to raise a ValueError before returning the result.

def sum_x_y(data: dict) -> int:
    if "x" not in data and "y" not in data and "extra" not in data:
        raise MissingBothCoordException("'extra' field and x / y coord missing.")
    if "x" not in data:
        raise MissingCoordException("The X coordinate is not present in the data.")
    if "y" not in data:
        raise MissingCoordException("The Y coordinate is not present in the data.")
    raise ValueError("Oh no, this shouldn't have happened.")
    return data["x"] + data["y"]

And then we re-run the test…

    def test_sum_x_y_works():
        data = {"extra": 1, "y": 2, "x": 1}

        try:
            sum_x_y(data)
        except Exception as exc:
>           assert False, f"'sum_x_y' raised an exception {exc}"
E           AssertionError: 'sum_x_y' raised an exception Oh no, this shouldn't have happened.
E           assert False

test_example.py:52: AssertionError
==================== short test summary info =====================
FAILED test_example.py::test_sum_x_y_works - AssertionErr...
======================= 1 failed in 0.02s ========================

It works! Our code raised the ValueError and the test failed!

Conclusion

That's it for today, folks! I hope you've learned something new and useful. Knowing how to test exceptions is an important skill to have. The way pytest does that is, IMHO, cleaner and much less verbose than unittest. In this article, I showed how you can not only assert that your code raises the expected exception, but also assert that no exception is raised when none is expected. Finally, we saw how to check that the exception message is what you expect, which makes test cases more reliable.

See you next time!

This post was originally published at https://miguendes.me

Kelvin Wangonya

The first time I had someone review my pull requests, she was pretty strict on tests. I couldn't merge if the tests were failing, of course. But I also couldn't merge if coverage had decreased by even 1%. TDD was still new to me, so maintaining coverage was a challenge since I was only testing the bare minimum I could. I had to find out how to make my tests more robust and ensure as much of my code was tested as possible. One area that I wasn't really sure how to test was the custom exceptions I had written. Here's an example:

# login.py

def check_email_format(email):
    """check that the entered email format is correct"""
    pass

def test_email_exception():
    """test that exception is raised for invalid emails"""
    pass


This is probably something you want to do if you’re implementing a system with email authentication. The example is oversimplified, but it serves the purpose of this post well.

To test for raised exceptions, pytest offers a handy method: pytest.raises. Let’s see how to use it in our example:

import re
import pytest

def check_email_format(email):
    """check that the entered email format is correct"""
    if not re.match(r"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$)", email):
        raise Exception("Invalid email format")
    else:
        return "Email format is ok"

def test_email_exception():
    """test that exception is raised for invalid emails"""
    with pytest.raises(Exception):
        assert check_email_format("good@email.com")


The check_email_format method takes in an email and checks that it matches the regex pattern given. If it does, it returns "Email format is ok", otherwise, an exception is raised.

Using pytest.raises in a with block as a context manager, we can check that an exception is actually raised if an invalid email is given. Running the tests on the code as it is above should fail:

collected 1 item                                                                                                                                                                                       
login.py F                [100%]

==================== FAILURES ========================

    def test_email_exception():
        """test that exception is raised for invalid emails"""
        with pytest.raises(Exception):
>           assert check_email_format("good@email.com")
E           Failed: DID NOT RAISE <class 'Exception'>

login.py:16: Failed


Notice it says Failed: DID NOT RAISE <class 'Exception'>. If an exception is not raised, the test fails. I found this to be pretty awesome. We passed in a valid email format (according to our standards here) so the test works as expected. Now we can make it pass.

import re
import pytest

def check_email_format(email):
    """check that the entered email format is correct"""
    if not re.match(r"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$)", email):
        raise Exception("Invalid email format")
    else:
        return "Email format is ok"

def test_email_exception():
    """test that exception is raised for invalid emails"""
    with pytest.raises(Exception):
        assert check_email_format("bademail.com") # invalid email format to raise exception


Run your test: pytest login.py:

collected 1 item                         

login.py .              [100%]

====================== 1 passed in 0.05 seconds ======================


You can also add an extra check for the exception message:

import re
import pytest

def check_email_format(email):
    """check that the entered email format is correct"""
    if not re.match(r"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$)", email):
        raise Exception("Invalid email format")
    else:
        return "Email format is ok"

def test_email_exception():
    """test that exception is raised for invalid emails"""
    with pytest.raises(Exception) as e:
        assert check_email_format("bademail.com")
    assert str(e.value) == "Invalid email format"
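
Alternatively, the message check can be folded into the context manager with the match argument. A sketch reusing the check_email_format above (remember match does a regex search, so special characters would need re.escape):

import pytest

def test_email_exception_match():
    """test that exception is raised with the expected message"""
    with pytest.raises(Exception, match="Invalid email format"):
        check_email_format("bademail.com")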


I got some feedback related to Bite 243 recently. Since that’s a testing bite, it means working with pytest and specifically checking for exceptions with pytest.raises(). The comment got me to look at this handy feature of pytest with fresh eyes, and it seemed like a trip worth sharing!

pytest.raises() as a Context Manager

We can use pytest.raises() to assert that a block of code raises a specific exception. Have a look at this sample from the pytest documentation:

def test_recursion_depth():
    with pytest.raises(RuntimeError) as excinfo:

        def f():
            f()

        f()
    assert "maximum recursion" in str(excinfo.value)

Is that test reasonably clear? I think so. But see how that assert is outside the with block? The first time I saw that sort of assertion, it felt odd to me. After all, my first exposure to the with statement was opening files:

with open('my_delicious_file.txt') as f:
    data = f.read()

When we get comfortable using open() in a with block like that, we pick up some lessons about context manager behavior. Context managers are good! They handle runtime context like opening and closing a file for us, sweeping details under the rug as any respectable abstraction should. As long as we only touch f inside that with block, our lives are long and happy. We probably don’t try to access f outside the block, and if we do things go awry since the file is closed. f is effectively dead to us once we leave that block.

I didn’t realize how much I had internalized that subtle lesson until the first time I saw examples of pytest.raises. It felt wrong to use excinfo after the with block, but when you think about it, that’s the only way it can work. We’re testing for an exception after all – once an exception happens we get booted out of that block. The pytest docs explain this well in a note here:

Note

When using pytest.raises as a context manager, it’s worthwhile to note that normal context manager rules apply and that the exception raised must be the final line in the scope of the context manager. Lines of code after that, within the scope of the context manager will not be executed. For example:

>>> value = 15
>>> with raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...     assert exc_info.type is ValueError  # this will not execute

Instead, the following approach must be taken (note the difference in scope):

>>> with raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...
>>> assert exc_info.type is ValueError

Under the Covers

What I didn’t think about until recently is how the open()-style context manager and the pytest.raises() style are mirror-world opposites:

                  open('file.txt') as f                      pytest.raises(ValueError) as excinfo
inside with       f is useful                                excinfo is present but useless (empty placeholder)
outside with      f is present but useless (file closed)     excinfo has exception details

How does this work under the covers? As the Python documentation notes, entering a with block invokes a context manager’s __enter__ method and leaving it invokes __exit__. Check out what happens when the context manager gets created, and what happens inside __enter__:

def __init__(
    self,
    expected_exception: Union["Type[_E]", Tuple["Type[_E]", ...]],
    message: str,
    match_expr: Optional[Union[str, "Pattern"]] = None,
) -> None:
    ... snip ...
    self.excinfo = None  # type: Optional[_pytest._code.ExceptionInfo[_E]]

def __enter__(self) -> _pytest._code.ExceptionInfo[_E]:
    self.excinfo = _pytest._code.ExceptionInfo.for_later()
    return self.excinfo

So that excinfo attribute starts empty — good, there’s no exception yet! But in a nod to clarity, it gets a placeholder ExceptionInfo value thanks to a for_later() method! Explicit is better than implicit indeed!

So what happens later when we leave the with block?

    def __exit__(
        self,
        exc_type: Optional["Type[BaseException]"],
        exc_val: Optional[BaseException],
        exc_tb: Optional[TracebackType],
    ) -> bool:
        ... snip ...
        exc_info = cast(
            Tuple["Type[_E]", _E, TracebackType], (exc_type, exc_val, exc_tb)
        )
        self.excinfo.fill_unfilled(exc_info)
        ... snip ...

Pytest checks for the presence and type of an exception, and then it delivers on its for_later() promise by filling in self.excinfo.

A Summary in Three Parts

With all that background out of the way, we can see the three-act play of excinfo's life, from nothing, to empty, to filled:

def __init__(...):
    self.excinfo = None  # type: Optional[_pytest._code.ExceptionInfo[_E]]

def __enter__(...):
    self.excinfo = _pytest._code.ExceptionInfo.for_later()
    return self.excinfo

def __exit__(...):
    self.excinfo.fill_unfilled(exc_info)

Which shows up in our test code as:

with pytest.raises(RuntimeError) as excinfo:  # excinfo: None
    # excinfo: Empty
    def f():
        f()

    f()
# excinfo: Filled
assert "maximum recursion" in str(excinfo.value)

And that’s a beautiful thing!

— AJ

References

With Statement Context Managers (python docs)
pytest.raises (pytest docs)
Assertions about expected exceptions (pytest docs)
PEP 343 — The "with" statement


One of the advantages of Pytest over the unittest module is that we don't need to use different
assert methods on different data structures. Pytest, by way of magic (also known as introspection),
can infer the actual value, the expected value, and the operation used in a plain old assert statement, and can
provide a rather nice error message.

Let’s see a few of those error messages:

In these examples I'll keep both the code under test and the testing function in the same file. You've already seen how it would look normally if we imported the functions under test from another module. If not, check out the getting started with pytest article.

Also, in order to make the results clear, I’ve removed the summary of the test runs and kept only the actual
error reporting.

Comparing numbers for equality in Pytest

Probably the most basic thing to test is whether a function given some input returns an expected number.

examples/python/pt3/test_number_equal.py

def double(n):
    #return 2*n
    return 2+n

def test_string_equal():
    assert double(2) == 4
    assert double(21) == 42

In the above function double someone has mistakenly used + instead of *. The result of the test looks like this

$ pytest test_number_equal.py

    def test_string_equal():
        assert double(2) == 4
>       assert double(21) == 42
E       assert 23 == 42
E        +  where 23 = double(21)

The line starting with the > sign indicates the assert line that failed. The lines starting with E are the details.

Compare numbers relatively

In certain cases we cannot test for equality. For example if we would like to test if some process finishes within a given time, or whether a timeout is triggered at the right time. In such cases we need to compare if a number is less-than or greater-than some other number.

examples/python/pt3/test_number_less_than.py

def get_number():
    return 23

def test_string_equal():
    assert get_number() < 0 

Running the test will provide the following error message:

$ pytest test_number_less_than.py

    def test_string_equal():
>       assert get_number() < 0
E       assert 23 < 0
E        +  where 23 = get_number()

The error report looks quite similar to what we had above, but in this case too it is clear which comparison operation failed.

Comparing strings

Similar to numbers we might want to know if a string received from some function is the same as we expect it to be.

examples/python/pt3/test_string_equal.py

def get_string():
    return "abc"

def test_string_equal():
    assert get_string() == "abd"

The result looks familiar:

$ pytest test_string_equal.py

    def test_string_equal():
>       assert get_string() == "abd"
E       AssertionError: assert 'abc' == 'abd'
E         - abc
E         + abd

For such short strings, seeing both the expected string and the actual string is ok.
We can look at the strings and compare them character by character to see the actual difference.

Compare long strings

If the strings are much longer, however, it would be really hard for us to pinpoint the specific location of the character (or characters) that differ. Luckily the authors of Pytest have thought about this problem as well:

examples/python/pt3/test_long_strings.py

import string

def get_string(s):
    return string.printable + s + string.printable

def test_long_strings():
    assert get_string('a') == get_string('b')

string.printable is a string containing all the printable ASCII characters:
0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ \t\n\r\x0b\x0c

Our brilliant get_string function will return it twice with an additional character between them.
We use this to create two nasty and long strings that differ by a single character.

The output looks like this:

$ pytest test_long_strings.py

    def test_long_strings():
>       assert get_string('a') == get_string('b')
E       AssertionError: assert '0123456789ab...\t\n\r\x0b\x0c' == '0123456789abc...\t\n\r\x0b\x0c'
E         Skipping 90 identical leading characters in diff, use -v to show
E         Skipping 91 identical trailing characters in diff, use -v to show
E           {|}~
E
E         - a012345678
E         ? ^
E         + b012345678
E         ? ^

I think this explains quite nicely where the two strings differ, and if you really, really want to see the
whole string you can use the -v flag.

Is string in longer string

If you need to check whether a string is part of a larger string, you can use the regular in operator.

examples/python/pt3/test_substring.py

import string

def get_string():
    return string.printable * 30

def test_long_strings():
    assert 'hello' in get_string()

In case of failure the result will include only the beginning and the end of the "long string".
This can be very useful if you need to test whether a certain string appears or not in an HTML page.

examples/python/pt3/test_substring.txt

    def test_long_strings():
>       assert 'hello' in get_string()
E       assert 'hello' in '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ \t\n\r\x0b\x0c012345...\x0b\x0c0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ \t\n\r\x0b\x0c'
E        +  where '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ \t\n\r\x0b\x0c012345...\x0b\x0c0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ \t\n\r\x0b\x0c' = get_string()

Testing any expression

Instead of calling a function we might have an expression on one side of the equation. (Actually, I am not sure how often this would happen in the real world.
Maybe we only see these in examples of how pytest works.)

examples/python/pt3/test_expression_equal.py

def test_expression_equal():
    a = 3
    assert a % 2 == 0

The test result:

$ pytest test_expression_equal.py

    def test_expression_equal():
        a = 3
>       assert a % 2 == 0
E       assert (3 % 2) == 0

Is element in a list?

Besides comparing individual values we might also want to compare more complex data. First, let's see what happens if our test must ensure that a value can be found in a list.

examples/python/pt3/test_in_list.py

def get_list():
    return ["monkey", "cat"]

def test_in_list():
    assert "dog" in get_list()

We can use the in operator of Python. The result will look like this:

$ pytest test_in_list.py

    def test_in_list():
>       assert "dog" in get_list()
E       AssertionError: assert 'dog' in ['monkey', 'cat']
E        +  where ['monkey', 'cat'] = get_list()

Pytest will conveniently show us the list that did not contain the expected value.

Compare lists in Pytest

A more interesting case might be testing if the returned list is the same as the expected list. Using the == operator can tell us if the two lists are equal or not, but if we need to understand what went wrong, we'd better know where the lists differ,
or at least where they start to differ.

examples/python/pt3/test_lists.py

import string
import re

def get_list(s):
    return list(string.printable + s + string.printable)

def test_long_lists():
    assert get_list('a') == get_list('b')

The result:

$ pytest test_lists.py

    def test_long_lists():
>       assert get_list('a') == get_list('b')
E       AssertionError: assert ['0', '1', '2...'4', '5', ...] == ['0', '1', '2'...'4', '5', ...]
E         At index 100 diff: 'a' != 'b'
E         Use -v to get the full diff

We could further explore the output for the cases when multiple elements differ and when one list is a sublist of the other.
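
As one sketch of the sublist case (our own example, not from the original article):

def test_sublist():
    # pytest reports that the right list contains one more item (4),
    # in addition to the usual diff hints; exact wording varies by version.
    assert [1, 2, 3] == [1, 2, 3, 4]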

Compare dictionaries in Pytest

Dictionaries can differ in a number of ways. The keys might be identical, but some values might differ.
Some keys might be missing in the actual result or there might be some extra keys.

In this example we test all of these:

Using string.printable we create a dictionary where the keys are the printable characters and the values
are their respective ASCII values returned by the ord function. Then we add (or replace) one key-value pair.

examples/python/pt3/test_dictionaries.py

import string
import re

def get_dictionary(k, v):
    d = dict([x, ord(x)] for x in  string.printable)
    d[k] = v
    return d

def test_big_dictionary_different_value():
    assert get_dictionary('a', 'def') == get_dictionary('a', 'abc')

def test_big_dictionary_different_keys():
    assert get_dictionary('abc', 1) == get_dictionary('def', 2)

The result looks like this:

$ pytest test_dictionaries.py

______________ test_big_dictionary_different_value _______________

    def test_big_dictionary_different_value():
>       assert get_dictionary('a', 'def') == get_dictionary('a', 'abc')
E       AssertionError: assert {'\t': 9, '\n...\x0c': 12, ...} == {'\t': 9, '\n...\x0c': 12, ...}
E         Omitting 99 identical items, use -v to show
E         Differing items:
E         {'a': 'def'} != {'a': 'abc'}
E         Use -v to get the full diff

_______________ test_big_dictionary_different_keys ________________

    def test_big_dictionary_different_keys():
>       assert get_dictionary('abc', 1) == get_dictionary('def', 2)
E       AssertionError: assert {'\t': 9, '\n...\x0c': 12, ...} == {'\t': 9, '\n...\x0c': 12, ...}
E         Omitting 100 identical items, use -v to show
E         Left contains more items:
E         {'abc': 1}
E         Right contains more items:
E         {'def': 2}
E         Use -v to get the full diff

The first test function got two dictionaries where the value of a single key differed.

The second test function had an extra key in both dictionaries.

Testing for expected exceptions in Pytest

Finally let’s look at exceptions!

A good test suite will test the expected behaviour both when the input is fine and
also when the input triggers some exception. Without testing the exceptions we cannot
be sure that they will really be raised when necessary. An incorrect refactoring
might eliminate the error checking from our code, thereby letting through invalid data
and either triggering a different exception, as in our example, or not generating any exception
at all and just silently doing the wrong thing.

In this brilliant example the divide function checks if the divisor is 0 and raises its
own type of exception instead of letting Python raise its own. If this is the defined behavior,
someone using our module will probably wrap our code in some try expression and expect
a ValueError. If someone changes our divide function and removes our
special exception, then we have basically broken our user's exception handling.

examples/python/pt3/test_exceptions.py

import pytest

def divide(a, b):
    if b == 0:
        raise ValueError('Cannot divide by Zero')
    return a / b

def test_zero_division():
    with pytest.raises(ValueError) as e:
        divide(1, 0)
    assert str(e.value) == 'Cannot divide by Zero' 

The test actually has two parts. The first part:

    with pytest.raises(ValueError) as e:
        divide(1, 0)

checks if a ValueError was raised during our call to divide(1, 0)
and will assign the exception object to the arbitrarily named variable e.

The second part is a plain assert that checks if the text of the exception is what
we expect it to be.
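
As a side note, the two parts can be collapsed into one line with the match parameter, a sketch reusing the divide function above (match uses re.search, so this is a regex search rather than full string equality):

import pytest

def test_zero_division_with_match():
    with pytest.raises(ValueError, match='Cannot divide by Zero'):
        divide(1, 0)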

This is now the expected behaviour. Our test passes:

$ pytest test_exceptions.py

test_exceptions.py .

What if someone changes the error message in our exception from Zero to Null?

examples/python/pt3/test_exceptions_text_changed.py

import pytest

def divide(a, b):
    if b == 0:
        raise ValueError('Cannot divide by Null')
    return a / b

def test_zero_division():
    with pytest.raises(ValueError) as e:
        divide(1, 0)
    assert str(e.value) == 'Cannot divide by Zero' 

The assert in the test will fail, indicating the change in the text.
This is actually a plain string comparison.

$ pytest test_exceptions_text_changed.py


    def test_zero_division():
        with pytest.raises(ValueError) as e:
            divide(1, 0)
>       assert str(e.value) == 'Cannot divide by Zero'
E       AssertionError: assert 'Cannot divide by Null' == 'Cannot divide by Zero'
E         - Cannot divide by Null
E         ?                  ^^^^
E         + Cannot divide by Zero
E         ?                  ^^^^

In the second example we show what happens when the special exception raising is gone,
either by mistake or because someone decided it should not be there.
In this case the first part of our test function will catch the different exception.

examples/python/pt3/test_exceptions_failing.py

import pytest

def divide(a, b):
#    if b == 0:
#        raise ValueError('Cannot divide by Zero')
    return a / b

def test_zero_division():
    with pytest.raises(ValueError) as e:
        divide(1, 0)
    assert str(e.value) == 'Cannot divide by Zero' 

The report will look like this:

$ pytest test_exceptions_failing.py

    def test_zero_division():
        with pytest.raises(ValueError) as e:
>           divide(1, 0)

test_exceptions_failing.py:10:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = 1, b = 0

    def divide(a, b):
    #    if b == 0:
    #        raise ValueError('Cannot divide by Zero')
>       return a / b
E       ZeroDivisionError: division by zero

Exception depositing money to the bank

Another case where checking for proper exceptions might be important is when
we want to avoid silently incorrect behavior.

For example, in this code we have a function called deposit that expects
a non-negative number. We added input validation that will raise an exception, protecting
the balance of our bank account. (In our example we only indicated the location of the code
that actually changes the balance.)

examples/python/pt3/test_bank.py

import pytest

def deposit(money):
    if money < 0:
        raise ValueError('Cannot deposit negative sum')

    # balance += money

def test_negative_deposit():
    with pytest.raises(ValueError) as e:
        deposit(-1)
    assert str(e.value) == 'Cannot deposit negative sum' 

We have also created a test case that will ensure that the protection is there,
or at least that the function raises an exception if -1 is passed to it.
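
If we wanted to cover several invalid sums, a parametrized sketch (our own extension of the example) could look like this:

import pytest

@pytest.mark.parametrize("money", [-1, -100, -0.5])
def test_negative_deposits(money):
    # reuses the deposit function from test_bank.py above
    with pytest.raises(ValueError) as e:
        deposit(money)
    assert str(e.value) == 'Cannot deposit negative sum'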

Conclusion

Pytest and its automatic error reporting is awesome.
