Testing

Tools, patterns and good practices

by Julius Seporaitis

Limitations and Assumptions

  • Python only. Some ideas may be applicable elsewhere.
  • My knowledge and experience. These are representative examples from codebases I have worked with; links are provided where possible.
  • Your active curiosity. Drill into the things that interest you and try them out in practice.

Agenda

  • Best Practice
  • External Services
  • AWS
  • Mocks
  • Django
  • Pytest
  • Structlog
  • GraphQL

Best Practice*

(* YMMV)

Learn Your Tools - Tox

[tox]
envlist = py3,lint

# ...

[testenv]
commands = pytest {posargs}

# ...

[testenv:lint]
commands = multilint {posargs}

Learn Your Tools - Tox

List of environments

[tox]
envlist = py3,lint
# ...

Executed in that order when tox runs.

Learn Your Tools - Tox

Shared configuration

[testenv]
commands = pytest {posargs}
# ...

Executed in all environments (unless overridden).

Learn Your Tools - Tox

Specific environment configuration

[testenv:lint]
commands = multilint {posargs}
# ...

Learn Your Tools - Tox

# run just pytest suite
$ tox -e py3

# run just lint
$ tox -e lint

# recreate (all) tox environments
$ tox -r

# recreate just testing environment
$ tox -re py3

Learn Your Tools - Pytest

# run individual test module
$ pytest path/to/test_module.py

# run single test case
$ pytest path/to/test_module.py::test_function

# verbose assert comparison
$ pytest path/to/test_module.py -vv

Learn Your Tools - Pytest

pytest ./examples/test_verbose.py

    def test_foo():
        data = [
            {"a": "A", "b": "B"},
            {"c": "C", "d": "D"},
        ]

        expected = [
            {"c": "C", "d": "D"},
            {"e": "E", "f": "F"},
        ]
>       assert data == expected
E       AssertionError: assert [{'a': 'A', '...C', 'd': 'D'}] == [{'c': 'C', '...E', 'f': 'F'}]
E         At index 0 diff: {'a': 'A', 'b': 'B'} != {'c': 'C', 'd': 'D'}
E         Use -v to get the full diff

examples/test_verbose.py:12: AssertionError
======================================== short test summary info ========================================

Learn Your Tools - Pytest

pytest ./examples/test_verbose.py -vv
_______________________________________________ test_foo ________________________________________________

    def test_foo():
        data = [
            {"a": "A", "b": "B"},
            {"c": "C", "d": "D"},
        ]

        expected = [
            {"c": "C", "d": "D"},
            {"e": "E", "f": "F"},
        ]
>       assert data == expected
E       AssertionError: assert [{'a': 'A', 'b': 'B'}, {'c': 'C', 'd': 'D'}] == [{'c': 'C', 'd': 'D'}, {'e': 'E', 'f': 'F'}]
E         At index 0 diff: {'a': 'A', 'b': 'B'} != {'c': 'C', 'd': 'D'}
E         Full diff:
E         - [{'c': 'C', 'd': 'D'}, {'e': 'E', 'f': 'F'}]
E         + [{'a': 'A', 'b': 'B'}, {'c': 'C', 'd': 'D'}]

examples/test_verbose.py:12: AssertionError
======================================== short test summary info ========================================
FAILED examples/test_verbose.py::test_foo - AssertionError: assert [{'a': 'A', 'b': 'B'}, {'c': 'C', '...

Learn Your Tools - Pytest

def test_foo():
  print("hello")

pytest ./examples/test_output.py -s

========================================== test session starts ==========================================
platform linux -- Python 3.7.5, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
Using --randomly-seed=1587314423
rootdir: /home/julius/code/testing-slides
plugins: requests-mock-1.7.0, randomly-3.3.1
collecting ...
collected 1 item

examples/test_output.py hello
.

=========================================== 1 passed in 0.01s ===========================================

Learn Your Tools - Pytest

pytest ./examples/test_debugger.py -s --pdb

    def test_failure():
        data = {"a": "A", "b": "B"}
>       assert "c" in data
E       AssertionError: assert 'c' in {'a': 'A', 'b': 'B'}

examples/test_debugger.py:4: AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PDB post_mortem >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /home/julius/code/testing-slides/examples/test_debugger.py(4)test_failure()
-> assert "c" in data
(Pdb) data.keys()
dict_keys(['a', 'b'])
(Pdb)

======================================== short test summary info ========================================
FAILED examples/test_debugger.py::test_failure - AssertionError: assert 'c' in {'a': 'A', 'b': 'B'}
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! _pytest.outcomes.Exit: Quitting debugger !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=========================================== 1 failed in 0.18s ===========================================

Learn Your Tools - Pytest

pytest ./examples/test_randomly.py
========================================== test session starts ==========================================
platform linux -- Python 3.7.5, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
Using --randomly-seed=1587314424
rootdir: /home/julius/code/testing-slides

pytest-randomly shuffles the test order on every run; to reproduce a particular order, re-run with the seed shown above:

pytest ./examples/test_randomly.py --randomly-seed=1587....

Learn Your Tools - Pytest

# no coverage report
pytest --no-cov

# don't rebuild database
pytest --reuse-db
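
Neither flag is built into pytest itself: --no-cov comes from the pytest-cov plugin and --reuse-db from pytest-django, so both have to be installed in the environment running the tests. A sketch of what the elided deps in tox.ini might contain - the exact packages are the project's choice, not prescribed here:

[testenv]
deps =
    pytest
    pytest-cov
    pytest-django
    pytest-randomly
commands = pytest {posargs}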

Learn Your Tools - Putting It Together

Remember {posargs}?

[tox]
envlist = py3,lint
# ...

[testenv]
commands = pytest {posargs}
# ...

# run single test
$ tox -e py3 -- path/test_file.py::test_function_name

# run interactive debugger
$ tox -e py3 -- -s --pdb

# no coverage report
$ tox -e py3 -- --no-cov

# don’t rebuild database each time (more later)
$ tox -e py3 -- --reuse-db

Avoid Module Level Constants

Module-level constants are fine only if they are never modified and are guaranteed to be immutable - and a plain Python dictionary never is. Therefore, don't do this:

VAL = {"a": "A"}


def test_valb():
  data = VAL
  data ["b"] = "B"

  assert data == {"a": "A", "b": "B"}


def test_valc():
  data = VAL
  data ["c"] = "C"

  assert data == {"a": "A", "c": "C"}

Avoid Module Level Constants

_______________________________________________ test_valb _______________________________________________

    def test_valb():
        data = VAL
        data["b"] = "B"
assert data == {"a": "A", "b": "B"}
E       AssertionError: assert {'a': 'A', 'b': 'B', 'c': 'C'} == {'a': 'A', 'b': 'B'}
E         Omitting 2 identical items, use -vv to show
E         Left contains 1 more item:
E         {'c': 'C'}
E         Use -v to get the full diff

examples/test_constants.py:8: AssertionError

Prefer Fixtures

@pytest.fixture
def val():
  return {"a": "b"}


def test_valb(val):
  val["b"] = "B"

  assert val == {"a": "A", "b": "B"}


def test_valc(val):
  val["c"] = "C"

  assert val == {"a": "A", "c": "C"}

Builder (Data)

If adjusting the fixture happens very often…

@pytest.fixture
def val():
  return {"a": "b"}


def test_valb(val):
  val["b"] = "B"
  val["c"] = "C"

  assert val == {"a": "A", "b": "B", "c": "C"}

Builder (Data)

… consider writing a builder fixture

@pytest.fixture
def build_val():
  def builder(**kwargs):
    base = {"a": "A"}
    return dict(base, **kwargs)

  return builder

def test_valb(build_val):
  val = build_val(b="B", c="C")

  assert val == {"a": "A", "b": "B", "c": "C"}

Builder (Code)

@pytest.fixture
def add_glue_response(aws_glue_stub, build_glue_job_run_response, task):
  def builder(job_run=None):
    job_run = job_run or build_glue_job_run_response()
    aws_glue_stub.add_response(
      "get_job_run",
      service_response={"JobRun": job_run},
      expected_params={"JobName": task.JOB_NAME, "RunId": "jr_testrun"},
    )
    return job_run
  return builder

def test_run__success(add_glue_response, task):
  add_glue_response()
  task.run()
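
The aws_glue_stub fixture used above is not shown; a minimal sketch of how such a fixture might be built on botocore's Stubber, assuming the task under test is wired to use this stubbed client:

import boto3
import pytest
from botocore.stub import Stubber


@pytest.fixture
def glue_client():
  # a real client object, but no request ever leaves the process
  return boto3.client("glue", region_name="eu-west-1")


@pytest.fixture
def aws_glue_stub(glue_client):
  with Stubber(glue_client) as stubber:
    yield stubber
    # fail the test if a queued response was never consumed
    stubber.assert_no_pending_responses()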

Limit Fixture Scope

  • If there is a big data fixture in conftest.py shared among different test modules…
  • … consider putting the customizations in each individual test module, using builder fixtures (shown earlier) or pytest fixture overrides (a quick sketch follows; more later).
  • The purpose: prevent a change to one shared fixture from breaking many tests across the whole package.
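
For example, a test module can override a fixture of the same name from conftest.py, keeping the customization next to the tests that need it - a minimal sketch with illustrative names:

# conftest.py
import pytest


@pytest.fixture
def val():
  return {"a": "A"}


# test_feature.py - overrides the conftest fixture for this module only
import pytest


@pytest.fixture
def val():
  return {"a": "A", "feature_flag": True}


def test_flag_enabled(val):
  assert val["feature_flag"] is True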

Long Descriptive Names

Different naming conventions apply to tests - be verbose and descriptive.

My current preference: test_function_name__outcome__modifier

  • test_run__success__with_defaults
  • test_run__success__with_params
  • test_run__failure__missing_inputs
  • test_run__failure__aws_client_error

Break Tests Before Context Switching

When working on a bigger piece of code and you have to context switch, write a (failing) test describing what you want to develop next.

It will help you get back up to speed when you return.

Links

External Services*

(*not AWS)

Everything from here onwards is still WIP.

Vcrpy

  • https://vcrpy.readthedocs.io/
  • Records live HTTP requests and responses.
  • Stores them in “cassette” files - JSON or YAML, or …
  • On later runs, replays the stored response instead of making the request.
  • Helps if test suite relies on expensive API calls.
  • Helps if you have the service locally, but not on CI.
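
A minimal sketch of how vcrpy is typically used - the URL and cassette path are illustrative:

import requests
import vcr


@vcr.use_cassette("cassettes/list_items.yaml")
def test_list_items():
  # the first run performs the real request and records it;
  # subsequent runs replay the response stored in the cassette
  response = requests.get("https://api.example.com/items")

  assert response.status_code == 200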

Responses

  • https://github.com/getsentry/responses
  • Mocks out the requests library.
  • Useful for raising exceptions (e.g.: ConnectionError).
  • Useful for simple request/response interactions.
  • Supports dynamic responses.
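
A minimal sketch covering both a plain response and the ConnectionError case (URLs are illustrative):

import pytest
import requests
import responses


@responses.activate
def test_list_items__success():
  responses.add(
    responses.GET,
    "https://api.example.com/items",
    json={"items": []},
    status=200,
  )

  assert requests.get("https://api.example.com/items").json() == {"items": []}


@responses.activate
def test_list_items__connection_error():
  # passing an exception instance as the body makes responses raise it
  responses.add(
    responses.GET,
    "https://api.example.com/items",
    body=requests.exceptions.ConnectionError("boom"),
  )

  with pytest.raises(requests.exceptions.ConnectionError):
    requests.get("https://api.example.com/items")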

Requests-Mock

AWS

Botocore Stubber

Patterns - Simple

Patterns - Builder

Avoid "Just" Functions

Always Test Errors

Mocks*

(*and Gotchas)

Mock

Monkeypatch

Test Interfaces

(use autospec=True)
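
A minimal sketch of what autospec=True buys you - the mock is checked against the real signature, so interface drift fails the test (the module and function names here are illustrative):

from unittest import mock

import pytest


def send_email(recipient, subject, body):
  ...  # stand-in for the real implementation


def test_mock_respects_signature():
  with mock.patch(__name__ + ".send_email", autospec=True) as mocked:
    mocked("a@example.com", "hi", "welcome")  # matches the real signature
    with pytest.raises(TypeError):
      mocked("a@example.com")  # missing arguments are caught, not ignored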

mock.ANY verbose output

Django

Save Time By Reusing Database

# save startup time
$ pytest --reuse-db
$ tox -e py3 -- --reuse-db
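
Both flags come from pytest-django. When models or migrations change, the reused database goes stale; pytest-django's --create-db forces a rebuild:

# force a rebuild after model/migration changes
$ pytest --create-db
$ tox -e py3 -- --create-db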

Pytest

Fixtures

Conftest.py

Fixture Overrides

Reuse Decorators

Raises

Structlog

Avoid Testing Logging

But If You Have To

Structlog Gotchas

Test the minimum set of details.
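
If logging does have to be asserted on, structlog's capture_logs helper (available in recent structlog versions) keeps the test focused on the few details that matter. A minimal sketch with an illustrative logger call:

import structlog
from structlog.testing import capture_logs

logger = structlog.get_logger()


def charge(order_id):
  logger.info("payment_charged", order_id=order_id, amount=100, currency="GBP")


def test_charge__logs_event():
  with capture_logs() as logs:
    charge(order_id=42)

  # assert the minimum set of details, not the whole log entry
  assert logs[0]["event"] == "payment_charged"
  assert logs[0]["order_id"] == 42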

GraphQL

File Uploads

Internal Errors

The End

Questions and suggestions?