Python automated testing 101

by Roman Podoliaka

Kharkiv, May 12, 2016


Why automate your test cases?

  • (open source) projects do not necessarily have dedicated QA engineers

  • projects are (usually) complex and time of engineers is precious

  • regression testing

  • test driven development

Automated testing

Test cases are usually written by software engineers:

  • Unit (individual functions or classes)

  • Integration (modules or logical subsystems)

Test cases are usually written by QA engineers:

  • Functional (checking a program against design specifications)

  • System (checking a program against system requirements)

def binary_search(arr, key):
    return _binary_search(arr, key, 0, len(arr))

def _binary_search(arr, key, left, right):
    if left >= right:
        return None
    middle = left + (right - left) // 2
    if arr[middle] == key:
        return middle
    elif arr[middle] > key:
        return _binary_search(arr, key, left, middle)
    else:
        return _binary_search(arr, key, middle + 1, right)
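For reference, a self-contained version of the search with a quick sanity check. Note that an empty interval returns None and the recursion excludes the probed middle index on the right-hand side, so it always terminates:

```python
def binary_search(arr, key):
    return _binary_search(arr, key, 0, len(arr))

def _binary_search(arr, key, left, right):
    # Empty interval: the key is not present.
    if left >= right:
        return None
    middle = left + (right - left) // 2
    if arr[middle] == key:
        return middle
    elif arr[middle] > key:
        # Key, if present, is strictly left of middle.
        return _binary_search(arr, key, left, middle)
    else:
        # Key, if present, is strictly right of middle.
        return _binary_search(arr, key, middle + 1, right)

arr = list(range(10))
print(binary_search(arr, 7))    # 7
print(binary_search(arr, 42))   # None
```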

A test case example

import unittest

class TestBinarySearch(unittest.TestCase):
    def setUp(self):
        self.empty = []
        self.arr = list(range(10))

    def test_empty_arr(self):
        self.assertIsNone(binary_search(self.empty, 42))

    def test_key_not_found(self):
        self.assertIsNone(binary_search(self.arr, 42))
suite = unittest.TestLoader().loadTestsFromTestCase(TestBinarySearch)
unittest.TextTestRunner().run(suite)

Ran 2 tests in 0.003s

<unittest.runner.TextTestResult run=2 errors=0 failures=0>

pytest: test runner

  • pytest is the most popular test runner for Python

  • discovers and runs test cases

  • supports advanced features (e.g. test fixtures, running tests in parallel, running only failed test cases, etc)
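With pytest, the same checks can be written as plain functions with bare assert statements, with no TestCase boilerplate. A sketch, using a bisect-based stand-in for the search implementation:

```python
import bisect

def binary_search(arr, key):
    # Stand-in implementation based on the standard library;
    # any implementation with the same contract would do.
    i = bisect.bisect_left(arr, key)
    return i if i < len(arr) and arr[i] == key else None

# pytest collects any function whose name starts with "test_";
# on failure it rewrites the assert to show the mismatching values.
def test_key_found():
    assert binary_search(list(range(10)), 7) == 7

def test_key_not_found():
    assert binary_search(list(range(10)), 42) is None
```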

(.venv3)Romans-Air:code malor$ py.test -v
=== test session starts ===

platform darwin -- Python 3.4.3, pytest-2.9.1, py-1.4.30, pluggy-0.3.1 -- /Users/malor/.venv3/bin/python3.4
cachedir: .cache
rootdir: /Users/malor/Dropbox/talks/autotesting/code, inifile:
collected 2 items

PASSED
PASSED

=== 2 passed in 0.05 seconds ===

coverage: measuring test coverage

  • coverage provides reporting on what lines of code have actually been executed

  • allows you to understand which parts of the code are covered by tests and which are not

  • test coverage is a measurable and comparable metric: high coverage is necessary (but not sufficient) to make sure code works as expected

(.venv3)Romans-Air:code malor$ coverage report -m
Name        Stmts   Miss Branch BrPart  Cover   Missing
-------------------------------------------------------
               23      3      8      3    81%   18, 22, 40, 17->18, 19->22, 39->40
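As a hypothetical illustration of what the Missing column means: in the function below, no test exercises the n > hi branch, so coverage would report that return line (and the branch leading to it) as unexecuted:

```python
def clamp(n, lo, hi):
    """Limit n to the closed interval [lo, hi]."""
    if n < lo:
        return lo
    if n > hi:
        return hi  # never reached by the tests below -> flagged as missed
    return n

# The test suite covers only two of the three paths through clamp().
def test_clamp_low():
    assert clamp(-5, 0, 10) == 0

def test_clamp_in_range():
    assert clamp(5, 0, 10) == 5
```

Adding a test such as `assert clamp(15, 0, 10) == 10` would bring the function to full branch coverage.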

tox: manager of virtual environments

  • tox standardizes the way tests are run for Python projects - it's as simple as:
$ tox
  • lets you define and manage multiple virtual environments (e.g. different Python versions or a separate virtual environment for building docs):
$ tox -epep8,py27,py35
A minimal tox.ini might look like:

[tox]
envlist = pep8,py27,py35

[testenv]
deps = pytest
commands = py.test --cov --cov-append

[testenv:pep8]
deps = flake8
commands = flake8
(.venv3)Romans-Air:code malor$ tox
___________________________________________________________________ summary 
  pep8: commands succeeded
  py27: commands succeeded
  py35: commands succeeded
  congratulations :)

CI (continuous integration)

  • testing is only effective if it is enforced (i.e. so that no one can "forget" to run the tests)

  • CI is intended to automate all the required build steps, including testing

  • CI can (and should) be run in "gating" mode: a commit cannot be merged if tests fail

Travis CI

sudo: false
language: python
python: 3.5

services:
  - mongodb

install:
  # greenlet is needed to submit concurrency=greenlet based coverage
  # to service
  - pip install tox coveralls greenlet

script:
  - tox

after_success:
  - coveralls

notifications:
  email: false



Jenkins

  • a comprehensive solution for CI/CD

  • a number of plugins for automating build steps

  • the #1 choice when you need a custom CI

  • can be configured declaratively by means of Jenkins Job Builder

<?xml version='1.0' encoding='UTF-8'?>
  <scm class="hudson.scm.NullSCM"/>
  <triggers class="vector"/>
      <command># Move into the jenkins directory
cd /var/lib/jenkins

# Add all top level xml files.
git add *.xml

# Add all job config.xml files.
git add jobs/*/config.xml

# Add all user config.xml files.
git add users/*/config.xml

# Add all user content files.
git add userContent/*

# Remove files from the remote repo that have been removed locally.
COUNT=`git log --pretty=format: --name-only --diff-filter=B | wc -l`
if [ $COUNT -ne 0 ]
  then git log --pretty=format: --name-only --diff-filter=B | xargs git rm
fi

# Commit the differences
git commit -a -m "Automated commit of jenkins chaos"

# Push the commit up to the remote repository.
git push origin master</command>

- common-sync-params: &common-sync-params
    # Branches to sync (see also short names below)
    upstream-branch: 'stable/mitaka'
    downstream-branch: '9.0/mitaka'
    fallback-branch: 'master'

    # Branch short names for jobs naming
    src-branch: mitaka
    dst-branch: 9.0

    # Synchronization schedule
    sync-schedule: 'H 3 * * *'  # every day at 3:XX am, spread evenly

    # Gerrit parameters
    gerrit-host: ''
    gerrit-port: '29418'
    gerrit-user: 'openstack-ci-mirrorer-jenkins'
    gerrit-creds: 'a4be8c41-43e0-4269-94d8-53133cfe3ae5'
    gerrit-topic: 'sync/stable/mitaka'

- job-group:
    name: 'sync-{name}-{src-branch}-{dst-branch}'
    jobs:
      - 'sync-{name}-{src-branch}-{dst-branch}'

Conclusions

  • “If it’s not tested, it’s broken.” (Monty Taylor)

  • test coverage is an important metric, but it's not the only one - use common sense!

  • writing tests is not always fun, but it makes your life easier

  • Travis CI is a good start, but sometimes a more complex or just custom configuration is needed - this is where Jenkins shines

Thank you!


Your feedback is welcome: @rpodoliaka or