## Contents

- [Introduction](#Introduction)
  - [3A](#3A)
- [Pytest Usage](#Pytest-usage)
  - [Install](#Install)
  - [Naming](#Naming)
  - [Run Pytest](#Run-Pytest)
  - [Pytest Options](#Pytest-Options)
  - [pytest.ini](#pytest.ini)
- [More Pytest Usage](#More-Pytest-Usage)
  - [Test Error Handling](#Test-Error-Handling)
    - [pytest.raises](#pytest.raises)
    - [xfail mark](#xfail-mark)
  - [Fixture](#Fixture)
  - [Parametrize](#Parametrize)
- [References](#References)

## Introduction

When writing programs, it is hard to write bug-free code, so we often need tests to ensure our code is OK. Let's look at an example first. Suppose we have a function `add_int()` that adds two integers and returns the sum.

```python=
def add_int(a: int, b: int) -> int:
    return a + b
```

A naive way to test it is

```python=
a = 9
b = 10
if add_int(a, b) == 19:
    print("OK")
else:
    print("Wrong")
```

The advantage is that it is easy to write, but it suffers from many downsides:

- Messy (unorganized)
- Coupling (changing A breaks B)
- Difficult to use
    - when the codebase becomes large
    - when the inputs are complicated objects

For these reasons, we need testing frameworks to build automated tests.

### 3A

When testing, we are **running multiple test functions**. Each test function is a **unit test** that follows the 3A pattern. In each unit test, we will

- **Arrange**: prepare the **input** and the **ground truth**
- **Act**: run the **function to be tested**
- **Assert**: compare the **output** with the **ground truth**

```python=
def test_add_int():
    # arrange
    a, b = 9, 10
    expected = 19
    # act
    output = add_int(a, b)
    # assert
    assert output == expected
```

## Pytest usage

### Install

Install the following packages with `pip`:

```
pytest
pytest-mock
pytest-cov
requests-mock
```

### Naming

Typically, we put source code in `src` and test code in `tests`. For the last example, we may have a directory layout like

```
.
├── __init__.py
├── src
│   ├── __init__.py
│   └── add.py
└── tests
    ├── __init__.py
    └── test_add.py
```

Below are the naming rules for `pytest` to recognize tests.

- file: starts with `test_`
- class: starts with `Test`
- function: starts with `test`

### Run Pytest

Once we have our test code, we can run `pytest` from the terminal. In the last example, we have

```python=
# test_add.py
from ..src.add import add_int


def test_add_int():
    a, b = 9, 10
    expected = 19
    output = add_int(a, b)
    assert output == expected
```

In the terminal, run

```shell=
$ pytest
```

**`pytest` will search under the current working directory, collect the functions that follow the naming rules above as unit tests, and run them.**

And then we get the report

![](https://hackmd.io/_uploads/HycyKJXF3.png)

### Pytest Options

Some useful options are available when running `pytest`.

#### `-v` or `-vv`

Increases verbosity.

![](https://hackmd.io/_uploads/rkN25J7tn.png)

#### `--cov src`

Reports what fraction of the code under the directory `src` is run by the tests (provided by `pytest-cov`).

![](https://hackmd.io/_uploads/BJBHskQFh.png)

#### `-m mark`

Groups (marks) tests so we can run them selectively. In the unit tests, we use a decorator to **mark** each test.

```python=
# test_add.py
import pytest

from ..src.add import add_int


@pytest.mark.add
def test_add_int():
    a, b = 9, 10
    expected = 19
    output = add_int(a, b)
    assert output == expected


@pytest.mark.sub
def test_sub_int():
    pass
```

In the terminal, we run `pytest` and specify the marks after the `-m` flag.

```bash=
$ pytest -v -m add
```

![](https://hackmd.io/_uploads/rJceRJ7Yh.png)

#### `-s`

Enables `print()` output inside unit tests. If not specified, `print()` has no visible effect because pytest captures stdout and stderr from the tests; `-s` is a shorthand for `--capture=no`.

### `pytest.ini`

A configuration file for the pytest runtime that puts all config and options together.
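For example, all the options above can be combined in a single invocation (assuming the `src/` layout and the `add` mark from the earlier examples):

```shell=
$ pytest -vv -m add --cov src/
```

Typing this on every run gets tedious, which is what the config file solves.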
Inside `pytest.ini`

```ini=
[pytest]
addopts = -vv -m add --cov src/
markers =
    # mark: description
    add: features related to add
    sub: features related to sub
```

- `addopts`: options to use when running pytest
- `markers`: registered markers

Then run

```bash=
$ pytest
```

![](https://hackmd.io/_uploads/Skb_cOUch.png)

### What can we improve?

- test error handling
- data preparation

## More pytest usage

### Test Error Handling

#### pytest.raises

When developing new features, we add error-handling code in case exceptional events happen. Given invalid input, we expect the code to catch the abnormal behavior and raise the corresponding error. How do we reflect this in the unit test?

Pytest provides a context manager that tells the unit test which error it should get. Upon getting the specified error, the unit test passes.

```python=
import pytest

from ..src.add import add_int


@pytest.mark.add
def test_add_int_exception():
    a, b = 9, "a"
    with pytest.raises(TypeError):
        add_int(a, b)  # adding int and str raises TypeError
```

#### xfail mark

Another way to write tests that **expect failure to happen** is by using the built-in pytest mark **xfail**.

```py=
import pytest


@pytest.mark.xfail
def test_xfail():
    assert 0
```

The result will look like

![](https://hackmd.io/_uploads/HJzjTsPq2.png)

If a test with the xfail mark passes, it will show **xpass**.

```py=
import pytest


@pytest.mark.xfail
def test_xfail():
    pass
```

![](https://hackmd.io/_uploads/rkEXAiD5n.png)

### Fixture

Fixtures are functions that prepare data. We can take fixtures as arguments in the unit tests. When running pytest, it first checks the parameters of each test function and looks for fixtures with the same names as those parameters. Once pytest finds them, it **requests** those fixtures to get the data as input (i.e. runs the fixture functions and passes their return values to the test function).

For example, pytest has a built-in fixture `tmp_path`, which returns a `pathlib.Path` to a temporary directory.
Inside `test_builtin_fixture.py`,

```python=
import pathlib


# PASS
def test_create_file(tmp_path: pathlib.Path):
    d = tmp_path / "sub"
    d.mkdir()
    data = "HELLO"
    p = d / "hello.txt"
    p.write_text(data)
    assert p.read_text() == data
    assert len(list(tmp_path.iterdir())) == 1
```

We need fixtures for

- reusing the same data preparation across tests
- decoupling tests from the dependencies required to build the input

#### Custom fixture

Pytest provides [a list of built-in fixtures](https://docs.pytest.org/en/6.2.x/fixture.html), and we can also write our own custom fixtures. To write a custom fixture, we only need to add the decorator `pytest.fixture`. For example, if we want a fixture that outputs `"banana"`:

```python=
import pytest


@pytest.fixture
def banana():
    return "banana"


def test_have_banana(banana):
    assert banana == "banana"
```

The above is the same as

```python=
def banana():
    return "banana"


def test_have_banana(banana):
    assert banana == "banana"


b = banana()
test_have_banana(banana=b)
```

#### Fixture requesting other fixtures

Fixtures can also request other fixtures (i.e. depend on other fixtures). For example

```py=
import pytest


@pytest.fixture
def empty_list():
    print("empty_list")
    return []


@pytest.fixture
def singleton_list(empty_list):
    empty_list.append(1)
    return empty_list


# PASS
def test_singleton_list(singleton_list):
    assert singleton_list == [1]
```

It is the same as

```python=
def empty_list():
    print("empty_list")
    return []


def singleton_list(empty_list):
    empty_list.append(1)
    return empty_list


def test_singleton_list(singleton_list):
    assert singleton_list == [1]


el = empty_list()
sl = singleton_list(empty_list=el)
test_singleton_list(singleton_list=sl)
```

#### Fixture cache

Each test can also request multiple fixtures. But note that **within a single test, each fixture is instantiated at most once**; repeated requests reuse the cached instance.
If a fixture *A* is required by another fixture *B* and the test *T*, then we will

- instantiate *A*
- feed the instance of *A* to instantiate *B*
- feed the same instance of *A*, together with *B*, to *T*

![](https://hackmd.io/_uploads/BkD7kyd92.png)

Below is an example that demonstrates this behavior.

```py=
import pytest


@pytest.fixture
def empty_list():
    print("empty_list")
    return []


@pytest.fixture
def singleton_list(empty_list):
    empty_list.append(1)
    return empty_list


# PASS
def test_empty_list_1(empty_list):
    assert empty_list == []


# FAIL
# due to the side effect of instantiating singleton_list
def test_empty_list_2(empty_list, singleton_list):
    assert empty_list == []
```

#### Common fixture

Some fixtures are used by many test files; we can collect those fixtures in the file `conftest.py`. Every unit test can use the fixtures inside `conftest.py` **without importing them**.

#### Scope

Usually, each test instantiates its own copy of the fixtures it requests. But we can make a fixture shared across a wider scope:

- function (default)
- class
- module
- package
- session

For example, we have a fixture in `conftest.py`

```python=
import pytest


@pytest.fixture(scope="module")
def double_list():
    return [1, 2]
```

and we have two tests in `test_scope.py`

```python=
# PASS
def test_double_list1(double_list):
    double_list.append(3)
    assert double_list == [1, 2, 3]


# FAIL
def test_double_list2(double_list):
    double_list.append(4)
    assert double_list == [1, 2, 4]
```

If we change the scope back to `function`, both tests will pass.

### Parametrize

If we want to run the same test function with different input data (even different marks), we could write something like

```python=
def test_add_int():
    # input 1
    a, b = 0, 19
    expected = 19
    assert add_int(a, b) == expected

    # input 2
    a, b = 9, "a"
    with pytest.raises(TypeError):
        add_int(a, b)
```

However, this bloats the code as we keep adding test cases by copy-pasting.
A more elegant way to run tests with different test cases is **parametrization**.

```python=
data = [
    pytest.param(0, 19, 19),
    pytest.param(9, "a", None, marks=pytest.mark.xfail),
]


@pytest.mark.parametrize("nums", data, indirect=False)
def test_add_int(nums):
    a, b, expected = nums
    assert add_int(a, b) == expected
```

There are two ways of parametrization, **direct** and **indirect**.

#### Direct Parametrization

To parametrize tests, we use `pytest.mark.parametrize`. There are 3 key parameters when using this function.

- `argnames` (str)
- `argvalues` (list of data points)
- `indirect` (bool)

When doing direct parametrization, we set `indirect=False` (the default). `argnames` specifies the parameters of the test function, and `argvalues` are the values fed to them.

```python=
import pytest

labeled_fruits = [
    ("apple", 5),
    ("banana", 4),
    ("grape", 7),
]


@pytest.mark.parametrize("fruit, label", labeled_fruits)
def test_direct(fruit, label):
    # apple 5
    # banana 4
    # grape 7
    print(fruit, label)
```

#### Indirect Parametrization

The difference between direct and indirect parametrization is whether fixtures are used. When `indirect` is set to true, `argnames` specifies the fixtures to be used, and `argvalues` are passed to those fixtures. After requesting the fixtures, the results are passed back to the test function, whose parameters must have the same names as the fixtures.
```python=
import pytest

labeled_fruits = [
    ("apple", 5),
    ("banana", 4),
    ("grape", 7),
]


@pytest.fixture
def fruit(request):
    # request is a built-in fixture
    return request.param


@pytest.fixture
def label(request):
    # request is a built-in fixture
    return request.param


@pytest.mark.parametrize("fruit, label", labeled_fruits, indirect=True)
def test_indirect2(label, fruit):
    # note the order of arguments
    print(fruit, label)
```

## References

- [Full pytest documentation](https://docs.pytest.org/en/7.1.x/contents.html)
- [Pytest 101 - 給 Python 開發者的測試入門](https://www.minglunwu.com/notes/2022/pytest_101.html)
- [pytest fixtures: explicit, modular, scalable](https://docs.pytest.org/en/6.2.x/fixture.html)
- [11、Pytest之@pytest.mark.parametrize使用详解](https://blog.csdn.net/totorobig/article/details/112235358)