https://docs.pytest.org/
https://www.python-course.eu/python3_pytest.php
https://realpython.com/pytest-python-testing/
Effective Python Testing With Pytest
https://stackoverflow.com/questions/41304311/running-pytest-test-functions-inside-a-jupyter-notebook
https://semaphoreci.com/blog/test-jupyter-notebooks-with-pytest-and-nbmake
https://docs.next.tech/creator/how-tos/testing-techniques/testing-jupyter-notebook-code-with-pytest
PIP (in a virtual environment): python -m pip install pytest
APT (Debian packages): python-pytest, python3-pytest
The 'pytest' (Py3.6+) framework makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries.
Names for test files (spam.py is a tested module): test_spam.py or spam_test.py.
# Content of the file test_sample.py
import pytest

def inc(x):
    return x + 1

def test_answer():   # a function name starting with 'test'
    # Do not use docstrings in test cases.
    assert inc(3) == 5   # intentional error; 'assert' is used
    assert pytest.approx(inc(10.1), abs=0.1) == 11.0
    assert pytest.approx(inc(10.1), rel=0.01) == 11.0   # default rel=1e-6
    # ordered sequences of numbers can be compared
    assert (0.1 + 0.2, 0.2 + 0.4) == pytest.approx((0.3, 0.6))
    # dictionary values can be compared
    assert {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == pytest.approx({'a': 0.3, 'b': 0.6})
# To execute tests:
$ pytest   # discovery mode
============================= test session starts ==============================
platform linux2 -- Python 2.7.16, pytest-3.10.1, py-1.7.0, pluggy-0.8.0
rootdir: .../week12/pytest1, inifile:
collected 1 item

test_sample.py F                                                         [100%]

=================================== FAILURES ===================================
_________________________________ test_answer __________________________________

    def test_answer():
>       assert inc(3) == 5
E       assert 4 == 5
E        +  where 4 = inc(3)

test_sample.py:8: AssertionError
=========================== 1 failed in 0.02 seconds ===========================
# Errors were corrected.
$ pytest   # discovery mode
============================= test session starts ==============================
platform linux2 -- Python 2.7.16, pytest-3.10.1, py-1.7.0, pluggy-0.8.0
rootdir: .../week12/pytest1, inifile:
collected 1 item

test_sample.py .                                                         [100%]

=========================== 1 passed in 0.01 seconds ===========================

# In Debian 10, 'python3 -m pytest' will use Python 3.7.
The 'pytest' report shows:
(1) the system state,
(2) the directory to search under for configuration and tests,
(3) the number of tests the runner discovered.
The output then indicates the status of each test using a syntax
similar to 'unittest':
(1) a dot (.) means that the test passed,
(2) an 'F' means that the test has failed,
(3) an 'E' means that an error occurred outside the test itself,
e.g. while running a fixture; an unexpected exception raised inside
the test body is reported as a failure ('F').
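The difference between 'F' and 'E' can be seen by running pytest on a small generated module: a failing assertion gives 'F', while an exception in a fixture gives 'E'. A sketch, assuming pytest is installed (the file name test_statuses.py and the helper run_statuses_demo are made up for this demo):

```python
# Generate a test module with one passing, one failing, and one erroring
# test, run pytest on it in a subprocess, and return the output.
import pathlib
import subprocess
import sys
import tempfile
import textwrap

SOURCE = textwrap.dedent("""
    import pytest

    def test_passes():            # reported as '.'
        assert 1 + 1 == 2

    def test_fails():             # reported as 'F' (assertion fails)
        assert 1 + 1 == 3

    @pytest.fixture
    def broken():
        raise RuntimeError("setup failed")

    def test_errors(broken):      # reported as 'E' (error in the fixture)
        assert True
""")

def run_statuses_demo():
    with tempfile.TemporaryDirectory() as tmp:
        path = pathlib.Path(tmp) / "test_statuses.py"
        path.write_text(SOURCE)
        result = subprocess.run(
            [sys.executable, "-m", "pytest", "-q", str(path)],
            capture_output=True, text=True)
        return result.stdout

if __name__ == "__main__":
    # The summary line reports 1 failed, 1 passed, 1 error.
    print(run_statuses_demo())
```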
# Content of the file test_set.py
import pytest

def test_union():
    setA = set([1, 3, 5])
    setB = set([2, 3])
    assert setA | setB == set([1, 2, 3, 5])
    assert setA.union(setB) == set([1, 2, 3, 5])

def test_intersection():
    setA = set([1, 3, 5])
    setB = set([2, 3])
    assert setA & setB == set([3])
    assert setA.intersection(setB) == set([3])

def test_difference():
    setA = set([1, 3, 5])
    setB = set([2, 3])
    assert setA - setB == set([1, 5])
    assert setA.difference(setB) == set([1, 5])

if __name__ == "__main__":
    pytest.main()
# To execute tests: $ python test_set.py # using pytest.main()
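pytest.main() accepts the same arguments as the command line and returns an exit code (0 = all tests passed, 1 = some failed, 5 = no tests collected). A minimal sketch, assuming pytest is installed (the helper run_on_empty_dir is illustrative):

```python
import tempfile

import pytest

def run_on_empty_dir():
    # An empty directory contains no test files, so pytest exits with
    # ExitCode.NO_TESTS_COLLECTED (== 5).
    with tempfile.TemporaryDirectory() as tmp:
        return pytest.main(["-q", tmp])
```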
You can think of a test as being broken down into four steps:
(1) Arrange - preparing everything for tests.
(2) Act - the action that kicks off
the 'behavior' we want to test.
(3) Assert - checking the results.
(4) Cleanup - restoring the original state so that
other tests are not affected.
'Fixtures' are for the 'Arrange' step.
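A 'yield' fixture can cover 'Cleanup' as well: the code before the yield arranges, the test runs at the yield, and the code after it cleans up. A minimal sketch (the Resource class is hypothetical, standing in for anything that must be acquired and released):

```python
import pytest

class Resource:
    """A stand-in for something that must be acquired and released."""
    def __init__(self):
        self.open = True      # Arrange: acquire the resource

    def close(self):
        self.open = False     # Cleanup: release it

@pytest.fixture
def resource():
    r = Resource()            # Arrange
    yield r                   # Act + Assert happen in the test using 'r'
    r.close()                 # Cleanup: runs after the test finishes

def test_resource_is_open(resource):
    assert resource.open      # Assert against the arranged object
```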
import pytest

@pytest.fixture   # decorator
def setA():
    return set([1, 3, 5])

@pytest.fixture
def setB():
    return set([2, 3])

def test_union(setA, setB):
    assert setA | setB == set([1, 2, 3, 5])
    assert setA.union(setB) == set([1, 2, 3, 5])

def test_intersection(setA, setB):
    assert setA & setB == set([3])
    assert setA.intersection(setB) == set([3])

def test_difference(setA, setB):
    assert setA - setB == set([1, 5])
    assert setA.difference(setB) == set([1, 5])
# Testing exceptions.
import pytest

def test_exceptions():
    pytest.raises(TypeError, len, 10)
    pytest.raises(TypeError, abs, "word")
    pytest.raises(ZeroDivisionError, lambda: 1/0)
    pytest.raises(IndexError, lambda: list()[1])
    pytest.raises(KeyError, lambda: dict()["key"])
# In order to write assertions about raised exceptions,
# we can use pytest.raises() as a context manager.

def test_zero_division():
    with pytest.raises(ZeroDivisionError):
        1 / 0

def test_recursion_depth():
    with pytest.raises(RuntimeError) as excinfo:
        def f():
            f()
        f()
    assert "maximum recursion" in str(excinfo.value)

# excinfo is an ExceptionInfo instance, which is a wrapper around
# the actual exception raised. The main attributes of interest are
# excinfo.type, excinfo.value, and excinfo.traceback.
# Testing the exception type and the error message with 'match'.

def test_recursion_depth():
    with pytest.raises(RuntimeError, match="maximum recursion"):
        def f():
            f()
        f()

def test_values():
    with pytest.raises(ValueError, match='must be 0 or None'):
        raise ValueError('value must be 0 or None')
    with pytest.raises(ValueError, match=r'must be \d+$'):   # a regular expression
        raise ValueError('value must be 42')
Test functions can be marked by decorating them with 'pytest.mark'. It is possible to create custom markers (markers have to be registered).
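Registration is usually done in the project's configuration file; a sketch of a pytest.ini entry (the marker name 'webtest' is illustrative):

```ini
[pytest]
markers =
    webtest: marks tests that exercise web-facing code
```

Unregistered custom markers trigger a PytestUnknownMarkWarning, and become errors when pytest is run with --strict-markers.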
$ pytest --markers   # list of existing markers
@pytest.mark.filterwarnings(warning): add a warning filter to the given test.
    see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings

@pytest.mark.skip(reason=None): skip the given test function with an optional
    reason. Example: skip(reason="no way of currently testing this") skips the test.
...
# Exemplary markers
@pytest.mark.skip("no way of currently testing this")
@pytest.mark.skipif('sys.platform == "win32"')
@pytest.mark.xfail()
@pytest.mark.xfail(raises=IndexError)   # why the test is failing
@pytest.mark.parametrize(argnames, argvalues)
@pytest.mark.tryfirst
@pytest.mark.trylast
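A hedged sketch of how skipif and xfail are applied in practice (the test names and reasons are illustrative):

```python
import sys

import pytest

@pytest.mark.skipif(sys.platform == "win32",
                    reason="exercises POSIX-only behavior")
def test_posix_separator():
    import os.path
    assert os.path.join("a", "b") == "a/b"

@pytest.mark.xfail(raises=IndexError, reason="known bug: empty list access")
def test_known_bug():
    [][0]                     # raises IndexError, so the test is 'xfailed'
```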
# Content of the file fib2.py

def fibonacci(n):
    old, new = 0, 1
    for _ in range(n):
        old, new = new, old + new
    return old
# Content of the file test_fib2.py
import pytest

from fib2 import fibonacci

@pytest.mark.parametrize('n, res',
                         [(0, 0), (1, 1), (2, 1), (3, 2),
                          (4, 3), (5, 5), (6, 8)])
def test_fibonacci(n, res):
    assert fibonacci(n) == res
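Stacking parametrize decorators runs the test once for every combination in the cross-product of the argument lists; a small sketch:

```python
import pytest

@pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [2, 3])
def test_cross_product(x, y):
    # Runs 4 times: (0, 2), (0, 3), (1, 2), (1, 3)
    assert x < y
```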
Grouping tests in classes can be beneficial for the following reasons:
(1) test organization,
(2) sharing fixtures for tests only in that particular class,
(3) applying marks at the class level and having them implicitly
apply to all tests.
Each test has a unique instance of the class.
import pytest

class TestSets:   # the 'Test' prefix is important
    # scope: the scope for which this fixture is shared;
    # "function" (default), "class", "module", "package", "session"
    @pytest.fixture(scope="class")
    def setA(self):   # 'self' is the first argument in methods
        return set([1, 3, 5])

    @pytest.fixture(scope="class")
    def setB(self):
        return set([2, 3])

    def test_union(self, setA, setB):
        assert setA | setB == set([1, 2, 3, 5])
        assert setA.union(setB) == set([1, 2, 3, 5])

    def test_intersection(self, setA, setB):
        assert setA & setB == set([3])
        assert setA.intersection(setB) == set([3])

    def test_difference(self, setA, setB):
        assert setA - setB == set([1, 5])
        assert setA.difference(setB) == set([1, 5])
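Class-level marks in action: a mark applied to the class is inherited by every test method inside it. A sketch (the custom marker name 'slow' is illustrative and would need registering):

```python
import pytest

@pytest.mark.slow            # applies to every test in the class
class TestSlowSuite:
    def test_first(self):
        assert sum(range(10)) == 45

    def test_second(self):
        assert max([3, 1, 2]) == 3
```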