======
Manual
======

Introduction
------------

This document provides an overview of the features provided by testtools.
Refer to the API docs (i.e. docstrings) for full details on a particular
feature.

Extensions to TestCase
----------------------

Controlling test execution
~~~~~~~~~~~~~~~~~~~~~~~~~~

Testtools supports two ways to control how tests are executed.  The simplest
is to add a new (exception class, handler) pair to
``self.exception_handlers``::

    >>> self.exception_handlers.insert(-1, (ExceptionClass, handler))

Having done this, if any of setUp, tearDown, or the test method raises
ExceptionClass, handler will be called with the test case, test result and the
raised exception.

Secondly, by overriding ``__init__`` to pass in ``runTest=RunTestFactory``,
the whole execution of the test can be altered.  The default is
``testtools.runtest.RunTest``, which calls ``case._run_setup``,
``case._run_test_method`` and finally ``case._run_teardown``.  Other ways to
control which RunTest is used may be added in future.
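
The two mechanisms can be illustrated together with a minimal run loop. This
is a sketch of the protocol only, not the real implementation: the names
``MiniRunTest``, ``FakeCase`` and ``run_test_method`` are hypothetical, while
``exception_handlers`` and the (exception class, handler) pairs mirror the
description above:

```python
class MiniRunTest:
    """Illustrative sketch of the RunTest idea: one object owns the whole
    setUp -> test method -> tearDown sequence, consulting a list of
    (ExceptionClass, handler) pairs when anything raises."""

    def __init__(self, case, exception_handlers):
        self.case = case
        self.exception_handlers = exception_handlers

    def run(self, result):
        try:
            self.case.setUp()
            self.case.run_test_method()
            self.case.tearDown()
        except Exception as e:
            # Dispatch to the first handler registered for this exception type.
            for exc_class, handler in self.exception_handlers:
                if isinstance(e, exc_class):
                    handler(self.case, result, e)
                    return
            raise


class FakeCase:
    def setUp(self):
        pass

    def tearDown(self):
        pass

    def run_test_method(self):
        raise KeyError('boom')


seen = []
handlers = [(KeyError, lambda case, result, exc: seen.append(exc))]
MiniRunTest(FakeCase(), handlers).run(result=None)
# seen now holds the KeyError raised by the test method
```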


TestCase.addCleanup
~~~~~~~~~~~~~~~~~~~

addCleanup is a robust way to arrange for a cleanup function to be called
before tearDown.  This is a powerful and simple alternative to putting cleanup
logic in a try/finally block or tearDown method.  e.g.::

    def test_foo(self):
        foo.lock()
        self.addCleanup(foo.unlock)
        ...

Cleanups can also report multiple errors, if appropriate, by wrapping them in
a testtools.MultipleExceptions object::

    raise MultipleExceptions(exc_info1, exc_info2)


TestCase.addOnException
~~~~~~~~~~~~~~~~~~~~~~~

addOnException adds an exception handler that will be called from the test
framework when it detects an exception raised by your test code. The handler
is given the exc_info for the exception, and can use this opportunity to
attach more data (via the addDetails API), among other uses.


TestCase.patch
~~~~~~~~~~~~~~

``patch`` is a convenient way to monkey-patch a Python object for the duration
of your test.  It's especially useful for testing legacy code.  e.g.::

    def test_foo(self):
        my_stream = StringIO()
        self.patch(sys, 'stderr', my_stream)
        run_some_code_that_prints_to_stderr()
        self.assertEqual('', my_stream.getvalue())

The call to ``patch`` above masks ``sys.stderr`` with ``my_stream``, so that
anything printed to stderr is captured in a StringIO object that can then be
tested. Once the test is done, the real ``sys.stderr`` is restored to its
rightful place.
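
The core of what ``patch`` does can be sketched with a small helper. This is
illustrative only: the real method also registers the restore step with
``addCleanup``, and the ``patch_attr`` helper below is hypothetical:

```python
import sys
from io import StringIO

def patch_attr(obj, name, value):
    """Replace obj.name with value; return a callable restoring the original."""
    original = getattr(obj, name)
    setattr(obj, name, value)
    return lambda: setattr(obj, name, original)

real_stderr = sys.stderr
my_stream = StringIO()
restore = patch_attr(sys, 'stderr', my_stream)
print('captured!', file=sys.stderr)  # lands in my_stream, not the terminal
restore()                            # sys.stderr is the real stream again
```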


TestCase.skipTest
~~~~~~~~~~~~~~~~~

``skipTest`` is a simple way to have a test stop running and be reported as a
skipped test, rather than a success/error/failure. This is an alternative to
convoluted logic during test loading, permitting later and more localized
decisions about the appropriateness of running a test. Many reasons exist to
skip a test - for instance when a dependency is missing, or if the test is
expensive and should not be run while on laptop battery power, or if the test
is testing an incomplete feature (this is sometimes called a TODO). Using this
feature when running your test suite with a TestResult object that is missing
the ``addSkip`` method will result in the ``addError`` method being invoked
instead.  ``skipTest`` was previously known as ``skip``, but as Python 2.7
adds ``skipTest`` support, the ``skip`` name is now deprecated (no warning is
emitted yet; one may be added in future).
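
Since Python 2.7 the same API exists in the standard library, so the
behaviour can be shown with plain ``unittest``: a skip-aware result records
the test and the reason in its ``skipped`` list. The test class and the
``have_dependency`` flag below are hypothetical:

```python
import unittest

class MaybeTest(unittest.TestCase):
    def test_needs_dependency(self):
        have_dependency = False  # e.g. an optional import failed
        if not have_dependency:
            self.skipTest('optional dependency not installed')
        self.fail('never reached when the skip fires')

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(MaybeTest).run(result)
# result.skipped holds (test, reason) pairs
```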


New assertion methods
~~~~~~~~~~~~~~~~~~~~~

testtools adds several assertion methods:

 * assertIn
 * assertNotIn
 * assertIs
 * assertIsNot
 * assertIsInstance
 * assertThat


Improved assertRaises
~~~~~~~~~~~~~~~~~~~~~

TestCase.assertRaises returns the caught exception.  This is useful for
asserting more things about the exception than just the type::

        error = self.assertRaises(UnauthorisedError, thing.frobnicate)
        self.assertEqual('bob', error.username)
        self.assertEqual('User bob cannot frobnicate', str(error))
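
The standard library's ``assertRaises``, when called with a callable, returns
``None``, so returning the exception is a testtools extension. Its core can
be sketched as a free function; ``UnauthorisedError``, ``frobnicate`` and the
``username`` attribute are hypothetical, echoing the example above:

```python
def assert_raises(exc_class, callable_, *args, **kwargs):
    """Call callable_; return the raised exc_class instance, or fail."""
    try:
        callable_(*args, **kwargs)
    except exc_class as e:
        return e
    raise AssertionError('%s not raised' % exc_class.__name__)

class UnauthorisedError(Exception):
    def __init__(self, username):
        super().__init__('User %s cannot frobnicate' % username)
        self.username = username

def frobnicate():
    raise UnauthorisedError('bob')

error = assert_raises(UnauthorisedError, frobnicate)
# error.username == 'bob'; str(error) == 'User bob cannot frobnicate'
```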


TestCase.assertThat
~~~~~~~~~~~~~~~~~~~

assertThat is a clean way to write complex assertions without tying them to
the TestCase inheritance hierarchy (and thus making them easier to reuse).

assertThat takes an object to be matched, and a matcher, and fails if the
matcher does not match the matchee.

See pydoc testtools.Matcher for the protocol that matchers need to implement.

testtools includes some matchers in testtools.matchers.  To list them::

    python -c 'import testtools.matchers; print testtools.matchers.__all__'

An example using the DocTestMatches matcher, which uses doctest's example
matching logic::

    def test_foo(self):
        self.assertThat([1,2,3,4], DocTestMatches('[1, 2, 3, 4]'))
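
The matcher protocol is small enough to sketch in full: a matcher's
``match(something)`` returns ``None`` on success, or a mismatch object with a
``describe()`` method on failure. The ``Equals``, ``Mismatch`` and
``assert_that`` names below are illustrative stand-ins, not the real
testtools classes:

```python
class Mismatch:
    """Describes why a match failed."""
    def __init__(self, description):
        self._description = description

    def describe(self):
        return self._description


class Equals:
    """A minimal matcher: matches anything equal to the expected value."""
    def __init__(self, expected):
        self.expected = expected

    def match(self, actual):
        if actual == self.expected:
            return None
        return Mismatch('%r != %r' % (actual, self.expected))


def assert_that(matchee, matcher):
    """Fail (like assertThat) if the matcher does not match the matchee."""
    mismatch = matcher.match(matchee)
    if mismatch is not None:
        raise AssertionError(mismatch.describe())


assert_that([1, 2, 3, 4], Equals([1, 2, 3, 4]))  # passes silently
```

Because matchers are plain objects rather than TestCase methods, they can be
shared between test suites and composed without inheritance.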


Creation methods
~~~~~~~~~~~~~~~~

testtools.TestCase implements creation methods called ``getUniqueString`` and
``getUniqueInteger``.  See pages 419-423 of *xUnit Test Patterns* by Meszaros
for a detailed discussion of creation methods.
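
The idea behind these creation methods can be sketched with a counter. The
method names mirror testtools', but the ``UniqueValues`` class itself is
illustrative:

```python
import itertools

class UniqueValues:
    """Hand out values guaranteed distinct within one test, so tests don't
    accidentally come to depend on particular literal values."""

    def __init__(self, prefix='unique'):
        self._counter = itertools.count(1)
        self._prefix = prefix

    def getUniqueInteger(self):
        return next(self._counter)

    def getUniqueString(self):
        return '%s-%d' % (self._prefix, self.getUniqueInteger())


values = UniqueValues()
a = values.getUniqueInteger()   # 1
b = values.getUniqueString()    # 'unique-2'
```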


Test renaming
~~~~~~~~~~~~~

``testtools.clone_test_with_new_id`` is a function to copy a test case
instance to one with a new name.  This is helpful for implementing test
parameterization.
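
A sketch of what such a copy involves, using a shallow copy and an overridden
``id()``. The real function lives in testtools; this version, and the sample
test and id string below, are illustrative only:

```python
import copy
import unittest

def clone_test_with_new_id(test, new_id):
    """Shallow-copy a test case instance and give the copy a new id."""
    clone = copy.copy(test)
    clone.id = lambda: new_id
    return clone

class SampleTest(unittest.TestCase):
    def test_something(self):
        pass

original = SampleTest('test_something')
clone = clone_test_with_new_id(
    original, 'SampleTest.test_something(variant=utf8)')
# clone runs the same code as original, but reports under the new id
```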


Extensions to TestResult
------------------------

TestResult.addSkip
~~~~~~~~~~~~~~~~~~

This method is called on result objects when a test skips. The
``testtools.TestResult`` class records skips in its ``skip_reasons`` instance
dict. These can be reported on in much the same way as successful tests.


TestResult.time
~~~~~~~~~~~~~~~

This method controls the time used by a TestResult, permitting accurate
timing of test results gathered on different machines or in different threads.
See pydoc testtools.TestResult.time for more details.


ThreadsafeForwardingResult
~~~~~~~~~~~~~~~~~~~~~~~~~~

A TestResult which forwards activity to another test result, but synchronises
on a semaphore to ensure that all the activity for a single test arrives in a
batch. This allows simple TestResults which do not expect concurrent test
reporting to be fed the activity from multiple test threads or processes.

Note that when you provide multiple errors for a single test, the target sees
each error as a distinct complete test.
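
The batching idea can be sketched with a lock and a per-test event buffer.
The ``BatchingForwarder`` and ``RecordingTarget`` classes below are
illustrative, not the real implementation:

```python
import threading

class BatchingForwarder:
    """Buffer one test's events, then flush them to the target while holding
    a shared lock, so a non-thread-safe target never sees two tests'
    activity interleaved."""

    def __init__(self, target, lock):
        self._target = target
        self._lock = lock
        self._events = []

    def startTest(self, test):
        self._events.append(('startTest', test))

    def addSuccess(self, test):
        self._events.append(('addSuccess', test))

    def stopTest(self, test):
        self._events.append(('stopTest', test))
        with self._lock:  # the whole batch arrives at once
            for method, t in self._events:
                getattr(self._target, method)(t)
        self._events = []


class RecordingTarget:
    """A stand-in for a simple, non-thread-safe TestResult."""
    def __init__(self):
        self.log = []

    def startTest(self, test):
        self.log.append(('startTest', test))

    def addSuccess(self, test):
        self.log.append(('addSuccess', test))

    def stopTest(self, test):
        self.log.append(('stopTest', test))


target = RecordingTarget()
forwarder = BatchingForwarder(target, threading.Lock())
forwarder.startTest('t1')
forwarder.addSuccess('t1')
forwarder.stopTest('t1')  # all three events reach the target here, in order
```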


TextTestResult
~~~~~~~~~~~~~~

A TestResult that provides a text UI very similar to the Python standard
library one. Key differences are that it supports the extended outcomes and
details API, and is completely encapsulated in the result object, permitting
it to be used without a 'TestRunner' object. Not all the Python 2.7 outcomes
are displayed (yet). It is also a 'quiet' result, with no dots or verbose
mode. These limitations will be corrected soon.


Test Doubles
~~~~~~~~~~~~

In testtools.testresult.doubles there are three test doubles that testtools
uses for its own testing: Python26TestResult, Python27TestResult and
ExtendedTestResult. These TestResult objects each implement a single variation
of the TestResult API, and log activity to a list, ``self._events``. They are
made available for the convenience of people writing their own extensions.


startTestRun and stopTestRun
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Python 2.7 added hooks 'startTestRun' and 'stopTestRun' which are called
before and after the entire test run. 'stopTestRun' is particularly useful for
test results that wish to produce summary output.

testtools.TestResult provides empty startTestRun and stopTestRun methods, and
the default testtools runner will call these methods appropriately.
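
With the standard library (2.7 or newer) the hooks can be exercised directly.
Note that plain ``TestSuite.run`` does not call them; that is the runner's
job, done here by hand. The ``SummaryResult`` and ``TrivialTest`` classes
are illustrative:

```python
import unittest

class SummaryResult(unittest.TestResult):
    """Produce summary output in stopTestRun, once the whole run is over."""
    summary = None

    def stopTestRun(self):
        super().stopTestRun()
        self.summary = 'Ran %d test(s)' % self.testsRun

class TrivialTest(unittest.TestCase):
    def test_nothing(self):
        pass

result = SummaryResult()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TrivialTest)
result.startTestRun()   # a runner would normally do this
suite.run(result)
result.stopTestRun()    # summary is computed here, after the full run
```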


Extensions to TestSuite
-----------------------

ConcurrentTestSuite
~~~~~~~~~~~~~~~~~~~

A TestSuite for parallel testing. This is used in conjunction with a helper
that runs a single suite in some parallel fashion (for instance by forking,
handing off to a subprocess, to a compute cloud, or simply to threads).
ConcurrentTestSuite uses the helper to obtain a number of separate runnable
objects, each with a run(result) method, runs them all in threads, and uses
ThreadsafeForwardingResult to coalesce their activity.
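
The overall shape can be sketched with plain threads. This is illustrative
only: the real class takes a suite plus a split helper, and the lock below
stands in for the forwarding result's batching:

```python
import threading
import unittest

class PassTest(unittest.TestCase):
    def test_ok(self):
        pass

def run_concurrently(suites, result, result_lock):
    """Run each sub-suite in its own thread; each thread reports into the
    shared result while holding the lock, so reports never interleave."""
    def run_one(suite):
        with result_lock:
            suite.run(result)
    threads = [threading.Thread(target=run_one, args=(s,)) for s in suites]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

loader = unittest.defaultTestLoader
suites = [loader.loadTestsFromTestCase(PassTest) for _ in range(3)]
result = unittest.TestResult()
run_concurrently(suites, result, threading.Lock())
# all three sub-suites have reported into the one result
```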


Running tests
-------------

Testtools provides a convenient way to run a test suite using the testtools
result object::

    python -m testtools.run testspec [testspec...]

Test discovery
--------------

Testtools includes a backported version of the Python 2.7 glue for using the
discover test discovery module. If you either have Python 2.7/3.1 or newer, or
install the 'discover' module, then you can invoke discovery::

    python -m testtools.run discover [path]

For more information see the Python 2.7 unittest documentation, or::

    python -m testtools.run --help