author    Jelmer Vernooij <jelmer@samba.org>  2011-08-27 16:07:25 +0200
committer Jelmer Vernooij <jelmer@samba.org>  2011-08-27 16:07:25 +0200
commit    dd56d27d74ad702803818237a2732d1e99b14da1 (patch)
tree      cc926bb235a7f96c1bc8e5a3644d19d306a719c2 /lib/testtools/doc
parent    ef3bb09db6f6985eac82f4e80259c44be6ca8c20 (diff)
download  samba-dd56d27d74ad702803818237a2732d1e99b14da1.tar.gz
          samba-dd56d27d74ad702803818237a2732d1e99b14da1.tar.bz2
          samba-dd56d27d74ad702803818237a2732d1e99b14da1.zip
testtools: Update to latest upstream snapshot.
Diffstat (limited to 'lib/testtools/doc')
-rw-r--r--  lib/testtools/doc/Makefile                    |   89
-rw-r--r--  lib/testtools/doc/_static/placeholder.txt     |    0
-rw-r--r--  lib/testtools/doc/_templates/placeholder.txt  |    0
-rw-r--r--  lib/testtools/doc/conf.py                     |  194
-rw-r--r--  lib/testtools/doc/for-framework-folk.rst      |  219
-rw-r--r--  lib/testtools/doc/for-test-authors.rst        | 1196
-rw-r--r--  lib/testtools/doc/hacking.rst                 |  154
-rw-r--r--  lib/testtools/doc/index.rst                   |   33
-rw-r--r--  lib/testtools/doc/make.bat                    |  113
-rw-r--r--  lib/testtools/doc/overview.rst                |   96
10 files changed, 2094 insertions, 0 deletions
diff --git a/lib/testtools/doc/Makefile b/lib/testtools/doc/Makefile
new file mode 100644
index 0000000000..b5d07af57f
--- /dev/null
+++ b/lib/testtools/doc/Makefile
@@ -0,0 +1,89 @@
+# Makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line.
+SPHINXOPTS =
+SPHINXBUILD = sphinx-build
+PAPER =
+BUILDDIR = _build
+
+# Internal variables.
+PAPEROPT_a4 = -D latex_paper_size=a4
+PAPEROPT_letter = -D latex_paper_size=letter
+ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
+
+.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest
+
+help:
+	@echo "Please use \`make <target>' where <target> is one of"
+	@echo "  html      to make standalone HTML files"
+	@echo "  dirhtml   to make HTML files named index.html in directories"
+	@echo "  pickle    to make pickle files"
+	@echo "  json      to make JSON files"
+	@echo "  htmlhelp  to make HTML files and a HTML help project"
+	@echo "  qthelp    to make HTML files and a qthelp project"
+	@echo "  latex     to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
+	@echo "  changes   to make an overview of all changed/added/deprecated items"
+	@echo "  linkcheck to check all external links for integrity"
+	@echo "  doctest   to run all doctests embedded in the documentation (if enabled)"
+
+clean:
+	-rm -rf $(BUILDDIR)/*
+
+html:
+	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
+	@echo
+	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
+
+dirhtml:
+	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
+	@echo
+	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
+
+pickle:
+	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
+	@echo
+	@echo "Build finished; now you can process the pickle files."
+
+json:
+	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
+	@echo
+	@echo "Build finished; now you can process the JSON files."
+
+htmlhelp:
+	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
+	@echo
+	@echo "Build finished; now you can run HTML Help Workshop with the" \
+	      ".hhp project file in $(BUILDDIR)/htmlhelp."
+
+qthelp:
+	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
+	@echo
+	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
+	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
+	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/testtools.qhcp"
+	@echo "To view the help file:"
+	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/testtools.qhc"
+
+latex:
+	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
+	@echo
+	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
+	@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
+	      "run these through (pdf)latex."
+
+changes:
+	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
+	@echo
+	@echo "The overview file is in $(BUILDDIR)/changes."
+
+linkcheck:
+	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
+	@echo
+	@echo "Link check complete; look for any errors in the above output " \
+	      "or in $(BUILDDIR)/linkcheck/output.txt."
+
+doctest:
+	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
+	@echo "Testing of doctests in the sources finished, look at the " \
+	      "results in $(BUILDDIR)/doctest/output.txt."
diff --git a/lib/testtools/doc/_static/placeholder.txt b/lib/testtools/doc/_static/placeholder.txt
new file mode 100644
index 0000000000..e69de29bb2
--- /dev/null
+++ b/lib/testtools/doc/_static/placeholder.txt
diff --git a/lib/testtools/doc/_templates/placeholder.txt b/lib/testtools/doc/_templates/placeholder.txt
new file mode 100644
index 0000000000..e69de29bb2
--- /dev/null
+++ b/lib/testtools/doc/_templates/placeholder.txt
diff --git a/lib/testtools/doc/conf.py b/lib/testtools/doc/conf.py
new file mode 100644
index 0000000000..de5fdd4224
--- /dev/null
+++ b/lib/testtools/doc/conf.py
@@ -0,0 +1,194 @@
+# -*- coding: utf-8 -*-
+#
+# testtools documentation build configuration file, created by
+# sphinx-quickstart on Sun Nov 28 13:45:40 2010.
+#
+# This file is execfile()d with the current directory set to its containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+import sys, os
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#sys.path.append(os.path.abspath('.'))
+
+# -- General configuration -----------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be extensions
+# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
+extensions = ['sphinx.ext.autodoc']
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix of source filenames.
+source_suffix = '.rst'
+
+# The encoding of source files.
+#source_encoding = 'utf-8'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = u'testtools'
+copyright = u'2010, The testtools authors'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+version = 'VERSION'
+# The full version, including alpha/beta/rc tags.
+release = 'VERSION'
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#language = None
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#today = ''
+# Else, today_fmt is used as the format for a strftime call.
+#today_fmt = '%B %d, %Y'
+
+# List of documents that shouldn't be included in the build.
+#unused_docs = []
+
+# List of directories, relative to source directory, that shouldn't be searched
+# for source files.
+exclude_trees = ['_build']
+
+# The reST default role (used for this markup: `text`) to use for all documents.
+#default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+# A list of ignored prefixes for module index sorting.
+#modindex_common_prefix = []
+
+
+# -- Options for HTML output ---------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. Major themes that come with
+# Sphinx are currently 'default' and 'sphinxdoc'.
+html_theme = 'default'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+#html_theme_options = {}
+
+# Add any paths that contain custom themes here, relative to this directory.
+#html_theme_path = []
+
+# The name for this set of Sphinx documents. If None, it defaults to
+# "<project> v<release> documentation".
+#html_title = None
+
+# A shorter title for the navigation bar. Default is the same as html_title.
+#html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+#html_logo = None
+
+# The name of an image file (within the static path) to use as favicon of the
+# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+#html_favicon = None
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
+# using the given strftime format.
+#html_last_updated_fmt = '%b %d, %Y'
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+#html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+#html_sidebars = {}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#html_additional_pages = {}
+
+# If false, no module index is generated.
+#html_use_modindex = True
+
+# If false, no index is generated.
+#html_use_index = True
+
+# If true, the index is split into individual pages for each letter.
+#html_split_index = False
+
+# If true, links to the reST sources are added to the pages.
+#html_show_sourcelink = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it. The value of this option must be the
+# base URL from which the finished HTML is served.
+#html_use_opensearch = ''
+
+# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
+#html_file_suffix = ''
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'testtoolsdoc'
+
+
+# -- Options for LaTeX output --------------------------------------------------
+
+# The paper size ('letter' or 'a4').
+#latex_paper_size = 'letter'
+
+# The font size ('10pt', '11pt' or '12pt').
+#latex_font_size = '10pt'
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title, author, documentclass [howto/manual]).
+latex_documents = [
+ ('index', 'testtools.tex', u'testtools Documentation',
+ u'The testtools authors', 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+#latex_logo = None
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#latex_use_parts = False
+
+# Additional stuff for the LaTeX preamble.
+#latex_preamble = ''
+
+# Documents to append as an appendix to all manuals.
+#latex_appendices = []
+
+# If false, no module index is generated.
+#latex_use_modindex = True
diff --git a/lib/testtools/doc/for-framework-folk.rst b/lib/testtools/doc/for-framework-folk.rst
new file mode 100644
index 0000000000..a4b20f64ca
--- /dev/null
+++ b/lib/testtools/doc/for-framework-folk.rst
@@ -0,0 +1,219 @@
+============================
+testtools for framework folk
+============================
+
+Introduction
+============
+
+In addition to having many features :doc:`for test authors
+<for-test-authors>`, testtools also has many bits and pieces that are useful
+for folk who write testing frameworks.
+
+If you are the author of a test runner, are working on a very large
+unit-tested project, are trying to get one testing framework to play nicely
+with another or are hacking away at getting your test suite to run in parallel
+over a heterogeneous cluster of machines, this guide is for you.
+
+This manual is a summary. You can get details by consulting the `testtools
+API docs`_.
+
+
+Extensions to TestCase
+======================
+
+Custom exception handling
+-------------------------
+
+testtools provides a way to control how test exceptions are handled. To do
+this, add a new exception to ``self.exception_handlers`` on a
+``testtools.TestCase``. For example::
+
+    >>> self.exception_handlers.insert(-1, (ExceptionClass, handler))
+
+Having done this, if any of ``setUp``, ``tearDown``, or the test method raise
+``ExceptionClass``, ``handler`` will be called with the test case, test result
+and the raised exception.
+
+Use this if you want to add a new kind of test result, that is, if you think
+that ``addError``, ``addFailure`` and so forth are not enough for your needs.
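+
+For illustration, here is a minimal sketch of a handler that records a
+hypothetical ``NetworkUnavailable`` error as a skip rather than an error
+(both names below are invented for this example, not testtools APIs)::
+
+    class NetworkUnavailable(Exception):
+        pass
+
+    def handle_network_unavailable(case, result, exc_value):
+        # Called with the test case, the test result and the raised
+        # exception, per the handler contract described above.
+        result.addSkip(case, reason=str(exc_value))
+
+    class MyTestCase(TestCase):
+        def setUp(self):
+            super(MyTestCase, self).setUp()
+            self.exception_handlers.insert(
+                -1, (NetworkUnavailable, handle_network_unavailable))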
+
+
+Controlling test execution
+--------------------------
+
+If you want to control more than just how exceptions are raised, you can
+provide a custom ``RunTest`` to a ``TestCase``. The ``RunTest`` object can
+change everything about how the test executes.
+
+To work with ``testtools.TestCase``, a ``RunTest`` must have a factory that
+takes a test and an optional list of exception handlers. Instances returned
+by the factory must have a ``run()`` method that takes an optional ``TestResult``
+object.
+
+The default is ``testtools.runtest.RunTest``, which calls ``setUp``, the test
+method, ``tearDown`` and clean ups (see :ref:`addCleanup`) in the normal, vanilla
+way that Python's standard unittest_ does.
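+
+For illustration, here is a sketch of a custom ``RunTest`` that wraps the
+default behaviour (it assumes the ``case`` attribute that the default
+``RunTest`` keeps for the test being run)::
+
+    from testtools.runtest import RunTest
+
+    class NoisyRunTest(RunTest):
+        """A RunTest that announces each test as it runs."""
+
+        def run(self, result=None):
+            # self.case is the test this RunTest was constructed with.
+            print('running %s' % self.case.id())
+            return super(NoisyRunTest, self).run(result)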
+
+To specify a ``RunTest`` for all the tests in a ``TestCase`` class, do something
+like this::
+
+    class SomeTests(TestCase):
+        run_tests_with = CustomRunTestFactory
+
+To specify a ``RunTest`` for a specific test in a ``TestCase`` class, do::
+
+    class SomeTests(TestCase):
+        @run_test_with(CustomRunTestFactory, extra_arg=42, foo='whatever')
+        def test_something(self):
+            pass
+
+In addition, either of these can be overridden by passing a factory in to the
+``TestCase`` constructor with the optional ``runTest`` argument.
+
+
+Test renaming
+-------------
+
+``testtools.clone_test_with_new_id`` is a function to copy a test case
+instance to one with a new name. This is helpful for implementing test
+parameterization.
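+
+For example, a sketch of naive parameterisation (the backend names here are
+invented)::
+
+    def make_backend_tests(test):
+        # Return renamed copies of 'test', one per backend, so that the
+        # results of each run can be told apart.
+        return [
+            clone_test_with_new_id(test, '%s(%s)' % (test.id(), backend))
+            for backend in ('sqlite', 'postgres')
+            ]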
+
+
+Test placeholders
+=================
+
+Sometimes, it's useful to be able to add things to a test suite that are not
+actually tests. For example, you might wish to represent import failures
+that occur during test discovery as tests, so that your test result object
+doesn't have to do special work to handle them nicely.
+
+testtools provides two such objects, called "placeholders": ``PlaceHolder``
+and ``ErrorHolder``. ``PlaceHolder`` takes a test id and an optional
+description. When it's run, it succeeds. ``ErrorHolder`` takes a test id,
+an error and an optional short description. When it's run, it reports that
+error.
+
+These placeholders are best used to log events that occur outside the test
+suite proper, but are still very relevant to its results.
+
+e.g.::
+
+    >>> suite = TestSuite()
+    >>> suite.addTest(PlaceHolder('I record an event'))
+    >>> suite.run(TextTestResult(verbose=True))
+    I record an event [OK]
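+
+``ErrorHolder`` is used in much the same way, except that it takes an
+exc_info tuple for the error (a sketch, assuming ``sys`` has been
+imported)::
+
+    >>> try:
+    ...     raise ImportError("No module named frobnicate")
+    ... except ImportError:
+    ...     error = sys.exc_info()
+    >>> suite.addTest(ErrorHolder('tests.test_frobnicate', error))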
+
+
+Extensions to TestResult
+========================
+
+TestResult.addSkip
+------------------
+
+This method is called on result objects when a test skips. The
+``testtools.TestResult`` class records skips in its ``skip_reasons`` instance
+dict. These can be reported on in much the same way as successful tests.
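+
+For example, a sketch of summarising the skips after a run::
+
+    result = TestResult()
+    suite.run(result)
+    for reason, tests in result.skip_reasons.items():
+        print('%s: %d test(s) skipped' % (reason, len(tests)))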
+
+
+TestResult.time
+---------------
+
+This method controls the time used by a ``TestResult``, permitting accurate
+timing of test results gathered on different machines or in different threads.
+See pydoc testtools.TestResult.time for more details.
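+
+For example, a sketch of replaying externally gathered timestamps (passing
+``None`` returns the result to the system clock)::
+
+    result.time(test_started_at)    # a datetime recorded elsewhere
+    result.startTest(test)
+    result.time(test_finished_at)
+    result.stopTest(test)
+    result.time(None)               # resume using wall-clock time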
+
+
+ThreadsafeForwardingResult
+--------------------------
+
+A ``TestResult`` which forwards activity to another test result, but synchronises
+on a semaphore to ensure that all the activity for a single test arrives in a
+batch. This allows simple TestResults which do not expect concurrent test
+reporting to be fed the activity from multiple test threads or processes.
+
+Note that when you provide multiple errors for a single test, the target sees
+each error as a distinct complete test.
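+
+A sketch of the typical wiring (one semaphore is shared by all forwarding
+results that target the same underlying result)::
+
+    from threading import Semaphore
+
+    target = TestResult()
+    semaphore = Semaphore(1)
+    forwarder = ThreadsafeForwardingResult(target, semaphore)
+    # 'forwarder' can now be used safely from a worker thread.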
+
+
+MultiTestResult
+---------------
+
+A test result that dispatches its events to many test results. Use this
+to combine multiple different test result objects into one test result object
+that can be passed to ``TestCase.run()`` or similar. For example::
+
+ a = TestResult()
+ b = TestResult()
+ combined = MultiTestResult(a, b)
+ combined.startTestRun() # Calls a.startTestRun() and b.startTestRun()
+
+Each of the methods on ``MultiTestResult`` will return a tuple of whatever the
+component test results return.
+
+
+TextTestResult
+--------------
+
+A ``TestResult`` that provides a text UI very similar to the Python standard
+library UI. Key differences are that it supports the extended outcomes and
+details API, and is completely encapsulated into the result object, permitting
+it to be used without a 'TestRunner' object. Not all the Python 2.7 outcomes
+are displayed (yet). It is also a 'quiet' result with no dots or verbose mode.
+These limitations will be corrected soon.
+
+
+ExtendedToOriginalDecorator
+---------------------------
+
+Adapts legacy ``TestResult`` objects, such as those found in older Pythons, to
+meet the testtools ``TestResult`` API.
+
+
+Test Doubles
+------------
+
+In testtools.testresult.doubles there are three test doubles that testtools
+uses for its own testing: ``Python26TestResult``, ``Python27TestResult``,
+``ExtendedTestResult``. These TestResult objects implement a single variation of
+the TestResult API each, and log activity to a list ``self._events``. These are
+made available for the convenience of people writing their own extensions.
+
+
+startTestRun and stopTestRun
+----------------------------
+
+Python 2.7 added hooks ``startTestRun`` and ``stopTestRun`` which are called
+before and after the entire test run. 'stopTestRun' is particularly useful for
+test results that wish to produce summary output.
+
+``testtools.TestResult`` provides default ``startTestRun`` and ``stopTestRun``
+methods, and the default testtools runner will call these methods
+appropriately.
+
+The ``startTestRun`` method will reset any errors, failures and so forth on
+the result, making the result object look as if no tests have been run.
+
+
+Extensions to TestSuite
+=======================
+
+ConcurrentTestSuite
+-------------------
+
+A TestSuite for parallel testing. This is used in conjunction with a helper
+that splits a single suite into a number of smaller, independently runnable
+suites (for instance, by forking, handing off to a subprocess, to a compute
+cloud, or to simple threads). ConcurrentTestSuite uses the helper to get the
+separate runnable objects, each with a ``run(result)`` method, runs them all
+in threads, and uses a ``ThreadsafeForwardingResult`` to coalesce their
+activity.
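+
+A sketch of usage (``split_suite`` is a hypothetical splitting helper written
+for this example; ``iterate_tests`` is from ``testtools.testsuite``)::
+
+    from unittest import TestSuite
+    from testtools import ConcurrentTestSuite
+    from testtools.testsuite import iterate_tests
+
+    def split_suite(suite):
+        # Partition the tests into two smaller, independently runnable
+        # suites (a naive split).
+        tests = list(iterate_tests(suite))
+        half = len(tests) // 2
+        return [TestSuite(tests[:half]), TestSuite(tests[half:])]
+
+    concurrent_suite = ConcurrentTestSuite(full_suite, split_suite)
+    concurrent_suite.run(result)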
+
+FixtureSuite
+------------
+
+A test suite that sets up a fixture_ before running any tests, and then tears
+it down after all of the tests are run. The fixture is *not* made available to
+any of the tests.
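+
+For example (a sketch, using the ``TempDir`` fixture from the fixtures_
+package)::
+
+    from fixtures import TempDir
+
+    suite = FixtureSuite(TempDir(), [SomeTests('test_one')])
+    suite.run(result)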
+
+.. _`testtools API docs`: http://mumak.net/testtools/apidocs/
+.. _unittest: http://docs.python.org/library/unittest.html
+.. _fixture: http://pypi.python.org/pypi/fixtures
diff --git a/lib/testtools/doc/for-test-authors.rst b/lib/testtools/doc/for-test-authors.rst
new file mode 100644
index 0000000000..eec98b14f8
--- /dev/null
+++ b/lib/testtools/doc/for-test-authors.rst
@@ -0,0 +1,1196 @@
+==========================
+testtools for test authors
+==========================
+
+If you are writing tests for a Python project and you (rather wisely) want to
+use testtools to do so, this is the manual for you.
+
+We assume that you already know Python and that you know something about
+automated testing already.
+
+If you are a test author of an unusually large or unusually unusual test
+suite, you might be interested in :doc:`for-framework-folk`.
+
+You might also be interested in the `testtools API docs`_.
+
+
+Introduction
+============
+
+testtools is a set of extensions to Python's standard unittest module.
+Writing tests with testtools is very much like writing tests with standard
+Python, or with Twisted's "trial_", or nose_, except a little bit easier and
+more enjoyable.
+
+Below, we'll try to give some examples of how to use testtools in its most
+basic way, as well as a sort of feature-by-feature breakdown of the cool bits
+that you could easily miss.
+
+
+The basics
+==========
+
+Here's what a basic testtools unit test looks like::
+
+    from testtools import TestCase
+    from myproject import silly
+
+    class TestSillySquare(TestCase):
+        """Tests for silly square function."""
+
+        def test_square(self):
+            # 'square' takes a number and multiplies it by itself.
+            result = silly.square(7)
+            self.assertEqual(result, 49)
+
+        def test_square_bad_input(self):
+            # 'square' raises a TypeError if it's given bad input, say a
+            # string.
+            self.assertRaises(TypeError, silly.square, "orange")
+
+
+Here you have a class that inherits from ``testtools.TestCase`` and bundles
+together a bunch of related tests. The tests themselves are methods on that
+class that begin with ``test_``.
+
+Running your tests
+------------------
+
+You can run these tests in many ways. testtools provides a very basic
+mechanism for doing so::
+
+ $ python -m testtools.run exampletest
+ Tests running...
+ Ran 2 tests in 0.000s
+
+ OK
+
+where 'exampletest' is a module that contains unit tests. By default,
+``testtools.run`` will *not* recursively search the module or package for unit
+tests. To do this, you will need to either have the discover_ module
+installed or have Python 2.7 or later, and then run::
+
+ $ python -m testtools.run discover packagecontainingtests
+
+For more information see the Python 2.7 unittest documentation, or::
+
+ python -m testtools.run --help
+
+As your testing needs grow and evolve, you will probably want to use a more
+sophisticated test runner. There are many of these for Python, and almost all
+of them will happily run testtools tests. In particular:
+
+* testrepository_
+* Trial_
+* nose_
+* unittest2_
+* `zope.testrunner`_ (aka zope.testing)
+
+From now on, we'll assume that you know how to run your tests.
+
+Running tests with Distutils
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you are using Distutils_ to build your Python project, you can use the testtools
+Distutils_ command to integrate testtools into your Distutils_ workflow::
+
+    from distutils.core import setup
+    from testtools import TestCommand
+
+    setup(name='foo',
+          version='1.0',
+          py_modules=['foo'],
+          cmdclass={'test': TestCommand}
+          )
+
+You can then run::
+
+ $ python setup.py test -m exampletest
+ Tests running...
+ Ran 2 tests in 0.000s
+
+ OK
+
+For more information about the capabilities of the `TestCommand` command see::
+
+ $ python setup.py test --help
+
+You can use the `setup configuration`_ to specify the default behavior of the
+`TestCommand` command.
+
+Assertions
+==========
+
+The core of automated testing is making assertions about the way things are,
+and getting a nice, helpful, informative error message when things are not as
+they ought to be.
+
+All of the assertions that you can find in Python standard unittest_ can be
+found in testtools (remember, testtools extends unittest). testtools changes
+the behaviour of some of those assertions slightly and adds some new
+assertions that you will almost certainly find useful.
+
+
+Improved assertRaises
+---------------------
+
+``TestCase.assertRaises`` returns the caught exception. This is useful for
+asserting more things about the exception than just the type::
+
+    def test_square_bad_input(self):
+        # 'square' raises a TypeError if it's given bad input, say a
+        # string.
+        e = self.assertRaises(TypeError, silly.square, "orange")
+        self.assertEqual("orange", e.bad_value)
+        self.assertEqual("Cannot square 'orange', not a number.", str(e))
+
+Note that this is incompatible with the ``assertRaises`` in unittest2 and
+Python 2.7.
+
+
+ExpectedException
+-----------------
+
+If you are using a version of Python that supports the ``with`` context
+manager syntax, you might prefer to use that syntax to ensure that code raises
+particular errors. ``ExpectedException`` does just that. For example::
+
+    def test_square_root_bad_input_2(self):
+        # 'square' raises a TypeError if it's given bad input.
+        with ExpectedException(TypeError, "Cannot square.*"):
+            silly.square('orange')
+
+The first argument to ``ExpectedException`` is the type of exception you
+expect to see raised. The second argument is optional, and can be either a
+regular expression or a matcher. If it is a regular expression, the ``str()``
+of the raised exception must match the regular expression. If it is a matcher,
+then the raised exception object must match it.
+
+
+assertIn, assertNotIn
+---------------------
+
+These two assertions check whether a value is in a sequence and whether a
+value is not in a sequence. They are "assert" versions of the ``in`` and
+``not in`` operators. For example::
+
+    def test_assert_in_example(self):
+        self.assertIn('a', 'cat')
+        self.assertNotIn('o', 'cat')
+        self.assertIn(5, list_of_primes_under_ten)
+        self.assertNotIn(12, list_of_primes_under_ten)
+
+
+assertIs, assertIsNot
+---------------------
+
+These two assertions check whether values are identical to one another. This
+is sometimes useful when you want to test something more strict than mere
+equality. For example::
+
+    def test_assert_is_example(self):
+        foo = [None]
+        foo_alias = foo
+        bar = [None]
+        self.assertIs(foo, foo_alias)
+        self.assertIsNot(foo, bar)
+        self.assertEqual(foo, bar)  # They are equal, but not identical
+
+
+assertIsInstance
+----------------
+
+As much as we love duck-typing and polymorphism, sometimes you need to check
+whether or not a value is of a given type. This method does that. For
+example::
+
+    def test_assert_is_instance_example(self):
+        now = datetime.now()
+        self.assertIsInstance(now, datetime)
+
+Note that there is no ``assertIsNotInstance`` in testtools currently.
+
+
+expectFailure
+-------------
+
+Sometimes it's useful to write tests that fail. For example, you might want
+to turn a bug report into a unit test, but you don't know how to fix the bug
+yet. Or perhaps you want to document a known, temporary deficiency in a
+dependency.
+
+testtools gives you the ``TestCase.expectFailure`` to help with this. You use
+it to say that you expect this assertion to fail. When the test runs and the
+assertion fails, testtools will report it as an "expected failure".
+
+Here's an example::
+
+    def test_expect_failure_example(self):
+        self.expectFailure(
+            "cats should be dogs", self.assertEqual, 'cats', 'dogs')
+
+As long as 'cats' is not equal to 'dogs', the test will be reported as an
+expected failure.
+
+If ever by some miracle 'cats' becomes 'dogs', then testtools will report an
+"unexpected success". Unlike standard unittest, testtools treats this as
+something that fails the test suite, like an error or a failure.
+
+
+Matchers
+========
+
+The built-in assertion methods are very useful; they are the bread and butter
+of writing tests. However, soon enough you will probably want to write your
+own assertions. Perhaps there are domain specific things that you want to
+check (e.g. assert that two widgets are aligned parallel to the flux grid), or
+perhaps you want to check something that could almost but not quite be found
+in some other standard library (e.g. assert that two paths point to the same
+file).
+
+When you are in such situations, you could either make a base class for your
+project that inherits from ``testtools.TestCase`` and make sure that all of
+your tests derive from that, *or* you could use the testtools ``Matcher``
+system.
+
+
+Using Matchers
+--------------
+
+Here's a really basic example using stock matchers found in testtools::
+
+    from testtools import TestCase
+    from testtools.matchers import Equals
+
+    class TestSquare(TestCase):
+        def test_square(self):
+            result = square(7)
+            self.assertThat(result, Equals(49))
+
+The line ``self.assertThat(result, Equals(49))`` is equivalent to
+``self.assertEqual(result, 49)`` and means "assert that ``result`` equals 49".
+The difference is that ``assertThat`` is a more general method that takes some
+kind of observed value (in this case, ``result``) and any matcher object
+(here, ``Equals(49)``).
+
+The matcher object could be absolutely anything that implements the Matcher
+protocol. This means that you can make more complex matchers by combining
+existing ones::
+
+    def test_square_silly(self):
+        result = square(7)
+        self.assertThat(result, Not(Equals(50)))
+
+Which is roughly equivalent to::
+
+    def test_square_silly(self):
+        result = square(7)
+        self.assertNotEqual(result, 50)
+
+
+Stock matchers
+--------------
+
+testtools comes with many matchers built in. They can all be found in and
+imported from the ``testtools.matchers`` module.
+
+Equals
+~~~~~~
+
+Matches if two items are equal. For example::
+
+    def test_equals_example(self):
+        self.assertThat([42], Equals([42]))
+
+
+Is
+~~~
+
+Matches if two items are identical. For example::
+
+    def test_is_example(self):
+        foo = object()
+        self.assertThat(foo, Is(foo))
+
+
+IsInstance
+~~~~~~~~~~
+
+Adapts isinstance() to use as a matcher. For example::
+
+    def test_isinstance_example(self):
+        class MyClass:
+            pass
+        self.assertThat(MyClass(), IsInstance(MyClass))
+        self.assertThat(MyClass(), IsInstance(MyClass, str))
+
+
+The raises helper
+~~~~~~~~~~~~~~~~~
+
+Matches if a callable raises a particular type of exception. For example::
+
+    def test_raises_example(self):
+        self.assertThat(lambda: 1/0, raises(ZeroDivisionError))
+
+This is actually a convenience function that combines two other matchers:
+Raises_ and MatchesException_.
+
+
+DocTestMatches
+~~~~~~~~~~~~~~
+
+Matches a string as if it were the output of a doctest_ example. Very useful
+for making assertions about large chunks of text. For example::
+
+    import doctest
+
+    def test_doctest_example(self):
+        output = "Colorless green ideas"
+        self.assertThat(
+            output,
+            DocTestMatches("Colorless ... ideas", doctest.ELLIPSIS))
+
+We highly recommend using the following flags::
+
+ doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE | doctest.REPORT_NDIFF
+
+
+GreaterThan
+~~~~~~~~~~~
+
+Matches if the given thing is greater than the thing in the matcher. For
+example::
+
+    def test_greater_than_example(self):
+        self.assertThat(3, GreaterThan(2))
+
+
+LessThan
+~~~~~~~~
+
+Matches if the given thing is less than the thing in the matcher. For
+example::
+
+    def test_less_than_example(self):
+        self.assertThat(2, LessThan(3))
+
+
+StartsWith, EndsWith
+~~~~~~~~~~~~~~~~~~~~
+
+These matchers check to see if a string starts with or ends with a particular
+substring. For example::
+
+    def test_starts_and_ends_with_example(self):
+        self.assertThat('underground', StartsWith('und'))
+        self.assertThat('underground', EndsWith('und'))
+
+
+Contains
+~~~~~~~~
+
+This matcher checks to see if the given thing contains the thing in the
+matcher. For example::
+
+    def test_contains_example(self):
+        self.assertThat('abc', Contains('b'))
+
+
+MatchesException
+~~~~~~~~~~~~~~~~
+
+Matches an exc_info tuple if the exception is of the correct type. For
+example::
+
+    def test_matches_exception_example(self):
+        try:
+            raise RuntimeError('foo')
+        except RuntimeError:
+            exc_info = sys.exc_info()
+        self.assertThat(exc_info, MatchesException(RuntimeError))
+        self.assertThat(exc_info, MatchesException(RuntimeError('foo')))
+
+Most of the time, you will want to use `The raises helper`_ instead.
+
+
+NotEquals
+~~~~~~~~~
+
+Matches if something is not equal to something else. Note that this is subtly
+different to ``Not(Equals(x))``. ``NotEquals(x)`` will match if ``y != x``,
+``Not(Equals(x))`` will match if ``not y == x``.
+
+You only need to worry about this distinction if you are testing code that
+relies on badly written overloaded equality operators.
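+
+For example::
+
+    def test_not_equals_example(self):
+        self.assertThat(42, NotEquals(43))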
+
+
+KeysEqual
+~~~~~~~~~
+
+Matches if the keys of one dict are equal to the keys of another dict. For
+example::
+
+    def test_keys_equal(self):
+        x = {'a': 1, 'b': 2}
+        y = {'a': 2, 'b': 3}
+        self.assertThat(x, KeysEqual(y))
+
+
+MatchesRegex
+~~~~~~~~~~~~
+
+Matches a string against a regular expression, which is a wonderful thing to
+be able to do, if you think about it::
+
+    def test_matches_regex_example(self):
+        self.assertThat('foo', MatchesRegex('fo+'))
+
+
+Combining matchers
+------------------
+
+One great thing about matchers is that you can readily combine existing
+matchers to get variations on their behaviour or to quickly build more complex
+assertions.
+
+Below are a few of the combining matchers that come with testtools.
+
+
+Not
+~~~
+
+Negates another matcher. For example::
+
+    def test_not_example(self):
+        self.assertThat([42], Not(Equals("potato")))
+        self.assertThat([42], Not(Is([42])))
+
+If you find yourself using ``Not`` frequently, you may wish to create a custom
+matcher for it. For example::
+
+    IsNot = lambda x: Not(Is(x))
+
+    def test_not_example_2(self):
+        self.assertThat([42], IsNot([42]))
+
+
+Annotate
+~~~~~~~~
+
+Used to add custom notes to a matcher. For example::
+
+    def test_annotate_example(self):
+        result = 43
+        self.assertThat(
+            result, Annotate("Not the answer to the Question!", Equals(42)))
+
+Since the annotation is only ever displayed when there is a mismatch
+(e.g. when ``result`` does not equal 42), it's a good idea to phrase the note
+negatively, so that it describes what a mismatch actually means.
+
+As with Not_, you may wish to create a custom matcher that describes a
+common operation. For example::
+
+    PoliticallyEquals = lambda x: Annotate("Death to the aristos!", Equals(x))
+
+    def test_annotate_example_2(self):
+        self.assertThat("orange", PoliticallyEquals("yellow"))
+
+You can have assertThat perform the annotation for you as a convenience::
+
+    def test_annotate_example_3(self):
+        self.assertThat("orange", Equals("yellow"), "Death to the aristos!")
+
+
+AfterPreprocessing
+~~~~~~~~~~~~~~~~~~
+
+Used to make a matcher that applies a function to the matched object before
+matching. This can be used to aid in creating trivial matchers as functions, for
+example::
+
+    def test_after_preprocessing_example(self):
+        def PathHasFileContent(content):
+            def _read(path):
+                return open(path).read()
+            return AfterPreprocessing(_read, Equals(content))
+        self.assertThat('/tmp/foo.txt', PathHasFileContent("Hello world!"))
+
+
+MatchesAll
+~~~~~~~~~~
+
+Combines many matchers to make a new matcher. The new matcher will only match
+things that match every single one of the component matchers.
+
+It's much easier to understand in Python than in English::
+
+    def test_matches_all_example(self):
+        has_und_at_both_ends = MatchesAll(StartsWith("und"), EndsWith("und"))
+        # This will succeed.
+        self.assertThat("underground", has_und_at_both_ends)
+        # This will fail.
+        self.assertThat("found", has_und_at_both_ends)
+        # So will this.
+        self.assertThat("undead", has_und_at_both_ends)
+
+At this point some people ask themselves, "Why bother doing this at all?
+Why not just have two separate assertions?". It's a good question.
+
+The first reason is that when a ``MatchesAll`` gets a mismatch, the error will
+include information about all of the bits that mismatched. When you have two
+separate assertions, as below::
+
+    def test_two_separate_assertions(self):
+        self.assertThat("foo", StartsWith("und"))
+        self.assertThat("foo", EndsWith("und"))
+
+Then you get absolutely no information from the second assertion if the first
+assertion fails. Tests are largely there to help you debug code, so having
+more information in error messages is a big help.
+
+The second reason is that it is sometimes useful to give a name to a set of
+matchers. ``has_und_at_both_ends`` is a bit contrived, of course, but it is
+clear.
+
+
+MatchesAny
+~~~~~~~~~~
+
+Like MatchesAll_, ``MatchesAny`` combines many matchers to make a new
+matcher. The difference is that the new matcher will match a thing if it
+matches *any* of the component matchers.
+
+For example::
+
+    def test_matches_any_example(self):
+        self.assertThat(42, MatchesAny(Equals(5), Not(Equals(6))))
+
+
+AllMatch
+~~~~~~~~
+
+Matches many values against a single matcher. Can be used to make sure that
+many things all meet the same condition::
+
+    def test_all_match_example(self):
+        self.assertThat([2, 3, 5, 7], AllMatch(LessThan(10)))
+
+If the match fails, then all of the values that fail to match will be included
+in the error message.
+
+In some ways, this is the converse of MatchesAll_.
+
+
+MatchesListwise
+~~~~~~~~~~~~~~~
+
+Where ``MatchesAny`` and ``MatchesAll`` combine many matchers to match a
+single value, ``MatchesListwise`` combines many matchers to match many values.
+
+For example::
+
+    def test_matches_listwise_example(self):
+        self.assertThat(
+            [1, 2, 3], MatchesListwise(map(Equals, [1, 2, 3])))
+
+This is useful for writing custom, domain-specific matchers.
+
+
+MatchesSetwise
+~~~~~~~~~~~~~~
+
+Combines many matchers to match many values, without regard to their order.
+
+Here's an example::
+
+    def test_matches_setwise_example(self):
+        self.assertThat(
+            [1, 2, 3], MatchesSetwise(Equals(2), Equals(3), Equals(1)))
+
+Much like ``MatchesListwise``, best used for writing custom, domain-specific
+matchers.
+
+
+MatchesStructure
+~~~~~~~~~~~~~~~~
+
+Creates a matcher that matches certain attributes of an object against a
+pre-defined set of matchers.
+
+It's much easier to understand in Python than in English::
+
+    def test_matches_structure_example(self):
+        foo = Foo()
+        foo.a = 1
+        foo.b = 2
+        matcher = MatchesStructure(a=Equals(1), b=Equals(2))
+        self.assertThat(foo, matcher)
+
+Since all of the matchers used were ``Equals``, we could also write this using
+the ``byEquality`` helper::
+
+    def test_matches_structure_example(self):
+        foo = Foo()
+        foo.a = 1
+        foo.b = 2
+        matcher = MatchesStructure.byEquality(a=1, b=2)
+        self.assertThat(foo, matcher)
+
+``MatchesStructure.fromExample`` takes an object and a list of attributes and
+creates a ``MatchesStructure`` matcher where each attribute of the matched
+object must equal each attribute of the example object. For example::
+
+ matcher = MatchesStructure.fromExample(foo, 'a', 'b')
+
+is exactly equivalent to ``matcher`` in the previous example.
+
+
+Raises
+~~~~~~
+
+Takes whatever the callable raises as an exc_info tuple and matches it against
+whatever matcher it was given. For example, if you want to assert that a
+callable raises an exception of a given type::
+
+    def test_raises_example(self):
+        self.assertThat(
+            lambda: 1/0, Raises(MatchesException(ZeroDivisionError)))
+
+Although note that this could also be written as::
+
+    def test_raises_example_convenient(self):
+        self.assertThat(lambda: 1/0, raises(ZeroDivisionError))
+
+See also MatchesException_ and `the raises helper`_.
+
+
+Writing your own matchers
+-------------------------
+
+Combining matchers is fun and can get you a very long way indeed, but
+sometimes you will have to write your own. Here's how.
+
+You need to make two closely-linked objects: a ``Matcher`` and a
+``Mismatch``. The ``Matcher`` knows how to actually make the comparison, and
+the ``Mismatch`` knows how to describe a failure to match.
+
+Here's an example matcher::
+
+    class IsDivisibleBy(object):
+        """Match if a number is divisible by another number."""
+
+        def __init__(self, divider):
+            self.divider = divider
+
+        def __str__(self):
+            return 'IsDivisibleBy(%s)' % (self.divider,)
+
+        def match(self, actual):
+            remainder = actual % self.divider
+            if remainder != 0:
+                return IsDivisibleByMismatch(actual, self.divider, remainder)
+            else:
+                return None
+
+The matcher has a constructor that takes parameters that describe what you
+actually *expect*, in this case a number that other numbers ought to be
+divisible by. It has a ``__str__`` method, the result of which is displayed
+on failure by ``assertThat`` and a ``match`` method that does the actual
+matching.
+
+``match`` takes something to match against, here ``actual``, and decides
+whether or not it matches. If it does match, then ``match`` must return
+``None``. If it does *not* match, then ``match`` must return a ``Mismatch``
+object. ``assertThat`` will call ``match`` and then fail the test if it
+returns a non-None value. For example::
+
+    def test_is_divisible_by_example(self):
+        # This succeeds, since IsDivisibleBy(5).match(10) returns None.
+        self.assertThat(10, IsDivisibleBy(5))
+        # This fails, since IsDivisibleBy(7).match(10) returns a mismatch.
+        self.assertThat(10, IsDivisibleBy(7))
+
+The mismatch is responsible for what sort of error message the failing test
+generates. Here's an example mismatch::
+
+    class IsDivisibleByMismatch(object):
+        def __init__(self, number, divider, remainder):
+            self.number = number
+            self.divider = divider
+            self.remainder = remainder
+
+        def describe(self):
+            return "%s is not divisible by %s, %s remains" % (
+                self.number, self.divider, self.remainder)
+
+        def get_details(self):
+            return {}
+
+The mismatch takes information about the mismatch, and provides a ``describe``
+method that assembles all of that into a nice error message for end users.
+You can use the ``get_details`` method to provide extra, arbitrary data with
+the mismatch (e.g. the contents of a log file). Most of the time it's fine to
+just return an empty dict. You can read more about Details_ elsewhere in this
+document.
+
+Sometimes you don't need to create a custom mismatch class. In particular, if
+you don't care *when* the description is calculated, then you can just do that
+in the Matcher itself like this::
+
+    def match(self, actual):
+        remainder = actual % self.divider
+        if remainder != 0:
+            return Mismatch(
+                "%s is not divisible by %s, %s remains" % (
+                    actual, self.divider, remainder))
+        else:
+            return None
+
+
+Details
+=======
+
+As we may have mentioned once or twice already, one of the great benefits of
+automated tests is that they help find, isolate and debug errors in your
+system.
+
+Frequently however, the information provided by a mere assertion failure is
+not enough. It's often useful to have other information: the contents of log
+files; what queries were run; benchmark timing information; what state certain
+subsystem components are in and so forth.
+
+testtools calls all of these things "details" and provides a single, powerful
+mechanism for including this information in your test run.
+
+Here's an example of how to add them::
+
+    from testtools import TestCase
+    from testtools.content import text_content
+
+    class TestSomething(TestCase):
+
+        def test_thingy(self):
+            self.addDetail('arbitrary-color-name', text_content("blue"))
+            1 / 0  # Gratuitous error!
+
+A detail is an arbitrary piece of content given a name that's unique within the
+test. Here the name is ``arbitrary-color-name`` and the content is
+``text_content("blue")``. The name can be any text string, and the content
+can be any ``testtools.content.Content`` object.
+
+When the test runs, testtools will show you something like this::
+
+    ======================================================================
+    ERROR: exampletest.TestSomething.test_thingy
+    ----------------------------------------------------------------------
+    arbitrary-color-name: {{{blue}}}
+
+    Traceback (most recent call last):
+      File "exampletest.py", line 8, in test_thingy
+        1 / 0  # Gratuitous error!
+    ZeroDivisionError: integer division or modulo by zero
+    ------------
+    Ran 1 test in 0.030s
+
+As you can see, the detail is included as an attachment, here saying
+that our arbitrary-color-name is "blue".
+
+
+Content
+-------
+
+For the actual content of details, testtools uses its own MIME-based Content
+object. This allows you to attach any information that you could possibly
+conceive of to a test, and allows testtools to use or serialize that
+information.
+
+The basic ``testtools.content.Content`` object is constructed from a
+``testtools.content.ContentType`` and a nullary callable that must return an
+iterator of chunks of bytes that the content is made from.
+
+So, to make a Content object that is just a simple string of text, you can
+do::
+
+ from testtools.content import Content
+ from testtools.content_type import ContentType
+
+ text = Content(ContentType('text', 'plain'), lambda: ["some text"])
+
+Because adding small bits of text content is very common, there's also a
+convenience method::
+
+ text = text_content("some text")
+
+To make content out of an image stored on disk, you could do something like::
+
+    image = Content(
+        ContentType('image', 'png'), lambda: [open('foo.png', 'rb').read()])
+
+Or you could use the convenience function::
+
+ image = content_from_file('foo.png', ContentType('image', 'png'))
+
+The ``lambda`` helps make sure that the file is opened and the actual bytes
+read only when they are needed – by default, when the test is finished. This
+means that tests can construct and add Content objects freely without worrying
+too much about how they affect run time.
+
+
+A realistic example
+-------------------
+
+A very common use of details is to add a log file to failing tests. Say your
+project has a server represented by a class ``SomeServer`` that you can start
+up and shut down in tests, but runs in another process. You want to test
+interaction with that server, and whenever the interaction fails, you want to
+see the client-side error *and* the logs from the server-side. Here's how you
+might do it::
+
+    from testtools import TestCase
+    from testtools.content import Content
+    from testtools.content_type import UTF8_TEXT
+
+    from myproject import SomeServer
+
+    class SomeTestCase(TestCase):
+
+        def setUp(self):
+            super(SomeTestCase, self).setUp()
+            self.server = SomeServer()
+            self.server.start_up()
+            self.addCleanup(self.server.shut_down)
+            self.addCleanup(self.attach_log_file)
+
+        def attach_log_file(self):
+            self.addDetail(
+                'log-file',
+                Content(UTF8_TEXT,
+                        lambda: open(self.server.logfile, 'r').readlines()))
+
+        def test_a_thing(self):
+            self.assertEqual("cool", self.server.temperature)
+
+This test will attach the log file of ``SomeServer`` to each test that is
+run. testtools will only display the log file for failing tests, so it's not
+such a big deal.
+
+If the act of adding a detail is expensive, you might want to use
+addOnException_ so that you only do it when a test actually raises an
+exception.
+
+
+Controlling test execution
+==========================
+
+.. _addCleanup:
+
+addCleanup
+----------
+
+``TestCase.addCleanup`` is a robust way to arrange for a clean up function to
+be called before ``tearDown``. This is a powerful and simple alternative to
+putting clean up logic in a try/finally block or ``tearDown`` method. For
+example::
+
+    def test_foo(self):
+        foo.lock()
+        self.addCleanup(foo.unlock)
+        ...
+
+This is particularly useful if you have some sort of factory in your test::
+
+    def make_locked_foo(self):
+        foo = Foo()
+        foo.lock()
+        self.addCleanup(foo.unlock)
+        return foo
+
+    def test_frotz_a_foo(self):
+        foo = self.make_locked_foo()
+        foo.frotz()
+        self.assertEqual(foo.frotz_count, 1)
+
+Any extra arguments or keyword arguments passed to ``addCleanup`` are passed
+to the callable at cleanup time.
+
+Cleanups can also report multiple errors, if appropriate by wrapping them in
+a ``testtools.MultipleExceptions`` object::
+
+ raise MultipleExceptions(exc_info1, exc_info2)
+
+
+Fixtures
+--------
+
+Tests often depend on a system being set up in a certain way, or having
+certain resources available to them. Perhaps a test needs a connection to the
+database or access to a running external server.
+
+One common way of doing this is to do::
+
+    class SomeTest(TestCase):
+        def setUp(self):
+            super(SomeTest, self).setUp()
+            self.server = Server()
+            self.server.setUp()
+            self.addCleanup(self.server.tearDown)
+
+testtools provides a more convenient, declarative way to do the same thing::
+
+    class SomeTest(TestCase):
+        def setUp(self):
+            super(SomeTest, self).setUp()
+            self.server = self.useFixture(Server())
+
+``useFixture(fixture)`` calls ``setUp`` on the fixture, schedules a clean up
+to clean it up, and schedules a clean up to attach all details_ held by the
+fixture to the test case. The fixture object must meet the
+``fixtures.Fixture`` protocol (version 0.3.4 or newer, see fixtures_).
+
+If you have anything beyond the most simple test set up, we recommend that
+you put this set up into a ``Fixture`` class. Once there, the fixture can be
+easily re-used by other tests and can be combined with other fixtures to make
+more complex resources.
+
+
+Skipping tests
+--------------
+
+Many reasons exist to skip a test: a dependency might be missing; a test might
+be too expensive and thus should not be run while on battery power; or perhaps
+the test is testing an incomplete feature.
+
+``TestCase.skipTest`` is a simple way to have a test stop running and be
+reported as a skipped test, rather than a success, error or failure. For
+example::
+
+    def test_make_symlink(self):
+        symlink = getattr(os, 'symlink', None)
+        if symlink is None:
+            self.skipTest("No symlink support")
+        symlink(whatever, something_else)
+
+Using ``skipTest`` means that you can make decisions about what tests to run
+as late as possible, and close to the actual tests. Without it, you might be
+forced to use convoluted logic during test loading, which is a bit of a mess.
+
+
+Legacy skip support
+~~~~~~~~~~~~~~~~~~~
+
+If you are using this feature when running your test suite with a legacy
+``TestResult`` object that is missing the ``addSkip`` method, then the
+``addError`` method will be invoked instead. If you are using a test result
+from testtools, you do not have to worry about this.
+
+In older versions of testtools, ``skipTest`` was known as ``skip``. Since
+Python 2.7 added ``skipTest`` support, the ``skip`` name is now deprecated.
+No warning is emitted yet – some time in the future we may do so.
+
+
+addOnException
+--------------
+
+Sometimes, you might wish to do something only when a test fails. Perhaps you
+need to run expensive diagnostic routines or some such.
+``TestCase.addOnException`` allows you to easily do just this. For example::
+
+    class SomeTest(TestCase):
+        def setUp(self):
+            super(SomeTest, self).setUp()
+            self.server = self.useFixture(SomeServer())
+            self.addOnException(self.attach_server_diagnostics)
+
+        def attach_server_diagnostics(self, exc_info):
+            self.server.prep_for_diagnostics()  # Expensive!
+            self.addDetail('server-diagnostics', self.server.get_diagnostics)
+
+        def test_a_thing(self):
+            self.assertEqual('cheese', 'chalk')
+
+In this example, ``attach_server_diagnostics`` will only be called when a test
+fails. It is given the exc_info tuple of the error raised by the test, just
+in case it is needed.
+
+
+Twisted support
+---------------
+
+testtools provides *highly experimental* support for running Twisted tests –
+tests that return a Deferred_ and rely on the Twisted reactor. You should not
+use this feature right now. We reserve the right to change the API and
+behaviour without telling you first.
+
+However, if you are going to, here's how you do it::
+
+    from testtools import TestCase
+    from testtools.deferredruntest import AsynchronousDeferredRunTest
+
+    class MyTwistedTests(TestCase):
+
+        run_tests_with = AsynchronousDeferredRunTest
+
+        def test_foo(self):
+            # ...
+            return d
+
+In particular, note that you do *not* have to use a special base ``TestCase``
+in order to run Twisted tests.
+
+You can also run individual tests within a test case class using the Twisted
+test runner::
+
+    class MyTestsSomeOfWhichAreTwisted(TestCase):
+
+        def test_normal(self):
+            pass
+
+        @run_test_with(AsynchronousDeferredRunTest)
+        def test_twisted(self):
+            # ...
+            return d
+
+Here are some tips for converting your Trial tests into testtools tests.
+
+* Use the ``AsynchronousDeferredRunTest`` runner
+* Make sure to upcall to ``setUp`` and ``tearDown``
+* Don't use ``setUpClass`` or ``tearDownClass``
+* Don't expect setting .todo, .timeout or .skip attributes to do anything
+* ``flushLoggedErrors`` is ``testtools.deferredruntest.flush_logged_errors``
+* ``assertFailure`` is ``testtools.deferredruntest.assert_fails_with``
+* Trial spins the reactor a couple of times before cleaning it up,
+ ``AsynchronousDeferredRunTest`` does not. If you rely on this behavior, use
+ ``AsynchronousDeferredRunTestForBrokenTwisted``.
+
+
+Test helpers
+============
+
+testtools comes with a few little things that make it a little bit easier to
+write tests.
+
+
+TestCase.patch
+--------------
+
+``patch`` is a convenient way to monkey-patch a Python object for the duration
+of your test. It's especially useful for testing legacy code. e.g.::
+
+    def test_foo(self):
+        my_stream = StringIO()
+        self.patch(sys, 'stderr', my_stream)
+        run_some_code_that_prints_to_stderr()
+        self.assertEqual('', my_stream.getvalue())
+
+The call to ``patch`` above masks ``sys.stderr`` with ``my_stream`` so that
+anything printed to stderr will be captured in a StringIO variable that can be
+actually tested. Once the test is done, the real ``sys.stderr`` is restored to
+its rightful place.
+
+
+Creation methods
+----------------
+
+Often when writing unit tests, you want to create an object that is a
+completely normal instance of its type. You don't want there to be anything
+special about its properties, because you are testing generic behaviour rather
+than specific conditions.
+
+A lot of the time, test authors do this by making up silly strings and numbers
+and passing them to constructors (e.g. 42, 'foo', "bar" etc), and that's
+fine. However, sometimes it's useful to be able to create arbitrary objects
+at will, without having to make up silly sample data.
+
+To help with this, ``testtools.TestCase`` implements creation methods called
+``getUniqueString`` and ``getUniqueInteger``. They return strings and
+integers that are unique within the context of the test, which can be used
+to assemble more complex objects. Here's a basic example where
+``getUniqueString`` is used instead of saying "foo" or "bar" or whatever::
+
+    class SomeTest(TestCase):
+
+        def test_full_name(self):
+            first_name = self.getUniqueString()
+            last_name = self.getUniqueString()
+            p = Person(first_name, last_name)
+            self.assertEqual(p.full_name, "%s %s" % (first_name, last_name))
+
+
+And here's how it could be used to make a complicated test::
+
+    class TestCoupleLogic(TestCase):
+
+        def make_arbitrary_person(self):
+            return Person(self.getUniqueString(), self.getUniqueString())
+
+        def test_get_invitation(self):
+            a = self.make_arbitrary_person()
+            b = self.make_arbitrary_person()
+            couple = Couple(a, b)
+            event_name = self.getUniqueString()
+            invitation = couple.get_invitation(event_name)
+            self.assertEqual(
+                invitation,
+                "We invite %s and %s to %s" % (
+                    a.full_name, b.full_name, event_name))
+
+Essentially, creation methods like these are a way of reducing the number of
+assumptions in your tests and communicating to test readers that the exact
+details of certain variables don't actually matter.
+
+See pages 419-423 of `xUnit Test Patterns`_ by Gerard Meszaros for a detailed
+discussion of creation methods.
+
+
+General helpers
+===============
+
+Conditional imports
+-------------------
+
+Lots of the time we would like to conditionally import modules. testtools
+needs to do this itself, and graciously extends the ability to its users.
+
+Instead of::
+
+    try:
+        from twisted.internet import defer
+    except ImportError:
+        defer = None
+
+You can do::
+
+ defer = try_import('twisted.internet.defer')
+
+
+Instead of::
+
+    try:
+        from StringIO import StringIO
+    except ImportError:
+        from io import StringIO
+
+You can do::
+
+ StringIO = try_imports(['StringIO.StringIO', 'io.StringIO'])
+
+
+Safe attribute testing
+----------------------
+
+``hasattr`` is broken_ on many versions of Python. testtools provides
+``safe_hasattr``, which can be used to safely test whether an object has a
+particular attribute.
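+
+For example (a sketch; it assumes ``safe_hasattr`` is importable from the
+top-level ``testtools`` namespace)::
+
+    from testtools import safe_hasattr
+
+    if safe_hasattr(os, 'symlink'):
+        os.symlink(source, target)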
+
+
+.. _testrepository: https://launchpad.net/testrepository
+.. _Trial: http://twistedmatrix.com/documents/current/core/howto/testing.html
+.. _nose: http://somethingaboutorange.com/mrl/projects/nose/
+.. _unittest2: http://pypi.python.org/pypi/unittest2
+.. _zope.testrunner: http://pypi.python.org/pypi/zope.testrunner/
+.. _xUnit test patterns: http://xunitpatterns.com/
+.. _fixtures: http://pypi.python.org/pypi/fixtures
+.. _unittest: http://docs.python.org/library/unittest.html
+.. _doctest: http://docs.python.org/library/doctest.html
+.. _Deferred: http://twistedmatrix.com/documents/current/core/howto/defer.html
+.. _discover: http://pypi.python.org/pypi/discover
+.. _`testtools API docs`: http://mumak.net/testtools/apidocs/
+.. _Distutils: http://docs.python.org/library/distutils.html
+.. _`setup configuration`: http://docs.python.org/distutils/configfile.html
+.. _broken: http://chipaca.com/post/3210673069/hasattr-17-less-harmful
diff --git a/lib/testtools/doc/hacking.rst b/lib/testtools/doc/hacking.rst
new file mode 100644
index 0000000000..b9f5ff22c6
--- /dev/null
+++ b/lib/testtools/doc/hacking.rst
@@ -0,0 +1,154 @@
+=========================
+Contributing to testtools
+=========================
+
+Coding style
+------------
+
+In general, follow `PEP 8`_ except where consistency with the standard
+library's unittest_ module would suggest otherwise.
+
+testtools supports Python 2.4 and later, including Python 3, so avoid any
+features introduced in Python 2.5 or later, such as the ``with`` statement.
+
+
+Copyright assignment
+--------------------
+
+Part of testtools' raison d'être is to provide Python with improvements to
+the testing code it ships. For that reason, we require all non-trivial
+contributions to meet one of the following rules:
+
+* be inapplicable for inclusion in Python.
+* be able to be included in Python without further contact with the contributor.
+* be copyright assigned to Jonathan M. Lange.
+
+Please pick one of these and specify it when contributing code to testtools.
+
+
+Licensing
+---------
+
+All code that is not copyright assigned to Jonathan M. Lange (see Copyright
+Assignment above) needs to be licensed under the `MIT license`_ that testtools
+uses, so that testtools can ship it.
+
+
+Testing
+-------
+
+Please write tests for every feature. This project ought to be a model
+example of well-tested Python code!
+
+Take particular care to make sure the *intent* of each test is clear.
+
+You can run tests with ``make check``.
+
+By default, testtools hides many levels of its own stack when running tests.
+This is for the convenience of users, who do not care about how, say, assert
+methods are implemented. However, when writing tests for testtools itself, it
+is often useful to see all levels of the stack. To do this, add
+``run_tests_with = FullStackRunTest`` to the top of a test's class definition.
+
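+For example, a minimal sketch (assuming ``FullStackRunTest`` is importable
+from ``testtools.tests.helpers``, where testtools keeps helpers for its own
+test suite)::
+
+    from testtools import TestCase
+    from testtools.tests.helpers import FullStackRunTest
+
+    class TestMyNewMatcher(TestCase):
+
+        run_tests_with = FullStackRunTest
+
+        def test_match(self):
+            # Failures here will show the full stack, including
+            # testtools internals.
+            pass
+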
+
+Documentation
+-------------
+
+Documents are written using the Sphinx_ variant of reStructuredText_. All
+public methods, functions, classes and modules must have API documentation.
+When changing code, be sure to check the API documentation to see if it could
+be improved. Before submitting changes to trunk, look over them and see if
+the manuals ought to be updated.
+
+
+Source layout
+-------------
+
+The top-level directory contains the ``testtools/`` package directory, and
+miscellaneous files like ``README`` and ``setup.py``.
+
+The ``testtools/`` directory is the Python package itself. It is separated
+into submodules for internal clarity, but all public APIs should be “promoted”
+into the top-level package by importing them in ``testtools/__init__.py``.
+Users of testtools should never import a submodule in order to use a stable
+API. Unstable APIs like ``testtools.matchers`` and
+``testtools.deferredruntest`` should be exported as submodules.
+
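+For example, promoting an invented ``FrobnicatingTestCase`` from a
+hypothetical ``testtools/frobnicator.py`` submodule would look roughly like
+this::
+
+    # In testtools/__init__.py -- promote the class so that users can
+    # import it from the top-level package:
+    from testtools.frobnicator import FrobnicatingTestCase
+
+    # In user code -- import from the package, never the submodule:
+    from testtools import FrobnicatingTestCase
+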
+Tests belong in ``testtools/tests/``.
+
+
+Committing to trunk
+-------------------
+
+testtools is maintained using bzr, with its trunk at lp:testtools. This gives
+every contributor the ability to commit their work to their own branches.
+However, permission must be granted before a contributor can commit to the
+trunk branch.
+
+Commit access to trunk is obtained by joining the testtools-committers
+Launchpad team. Membership in this team is contingent on obeying the testtools
+contribution policy, see `Copyright Assignment`_ above.
+
+
+Code Review
+-----------
+
+All code must be reviewed before landing on trunk. The process is to create a
+branch on Launchpad and submit it for merging to lp:testtools. It will then
+be reviewed before it can be merged to trunk, by someone who is:
+
+* not the author of the change
+* a committer (a member of the `~testtools-committers`_ team)
+
+As a special exception, while the testtools committers team is small and prone
+to blocking, a merge request from a committer that has not been reviewed after
+24 hours may be merged by that committer. When the team is larger this policy
+will be revisited.
+
+Code reviewers should look for the quality of what is being submitted,
+including conformance with this HACKING file.
+
+Changes that all users should be made aware of must be documented in NEWS.
+
+
+NEWS management
+---------------
+
+The file NEWS is structured as a sorted list of releases. Each release can
+have a free-form description and one or more sections with bullet point
+items. Sections in use today are 'Improvements' and 'Changes'. To ease
+merging between branches, the bullet points are kept alphabetically sorted.
+The release NEXT is permanently present at the top of the list.
+
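+A sketch of the expected layout (the version number and entries here are
+invented, and the exact underline characters may differ from release to
+release)::
+
+    NEXT
+    ~~~~
+
+    Improvements
+    ------------
+
+    * Frobnication is now 10% faster.
+
+    0.9.0
+    ~~~~~
+
+    Changes
+    -------
+
+    * Dropped support for frobnication.
+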
+
+Release tasks
+-------------
+
+#. Choose a version number, say X.Y.Z
+#. Branch from trunk to testtools-X.Y.Z
+#. In testtools-X.Y.Z, ensure __init__ has version ``(X, Y, Z, 'final', 0)``
+#. Replace NEXT in NEWS with the version number X.Y.Z, adjusting the reST.
+#. Possibly write a blurb into NEWS.
+#. Replace any additional references to NEXT with the version being
+   released. (There should be none, apart from the occurrences in these
+   release tasks, which must not be replaced.)
+#. Commit the changes.
+#. Tag the release: ``bzr tag testtools-X.Y.Z``.
+#. Run ``make release``, which:
+ #. Creates a source distribution and uploads to PyPI
+ #. Ensures all Fix Committed bugs are in the release milestone
+ #. Makes a release on Launchpad and uploads the tarball
+ #. Marks all the Fix Committed bugs as Fix Released
+ #. Creates a new milestone
+#. Merge the release branch testtools-X.Y.Z into trunk. Before the commit,
+ add a NEXT heading to the top of NEWS and bump the version in __init__.py.
+   Push trunk to Launchpad.
+#. If a new series has been created (e.g. 0.10.0), make the series on Launchpad.
+
+.. _PEP 8: http://www.python.org/dev/peps/pep-0008/
+.. _unittest: http://docs.python.org/library/unittest.html
+.. _~testtools-committers: https://launchpad.net/~testtools-committers
+.. _MIT license: http://www.opensource.org/licenses/mit-license.php
+.. _Sphinx: http://sphinx.pocoo.org/
+.. _restructuredtext: http://docutils.sourceforge.net/rst.html
+
diff --git a/lib/testtools/doc/index.rst b/lib/testtools/doc/index.rst
new file mode 100644
index 0000000000..4687cebb62
--- /dev/null
+++ b/lib/testtools/doc/index.rst
@@ -0,0 +1,33 @@
+.. testtools documentation master file, created by
+ sphinx-quickstart on Sun Nov 28 13:45:40 2010.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+testtools: tasteful testing for Python
+======================================
+
+testtools is a set of extensions to the Python standard library's unit testing
+framework. These extensions have been derived from many years of experience
+with unit testing in Python and come from many different sources. testtools
+also ports recent unittest changes all the way back to Python 2.4.
+
+
+Contents:
+
+.. toctree::
+ :maxdepth: 1
+
+ overview
+ for-test-authors
+ for-framework-folk
+ hacking
+ Changes to testtools <news>
+ API reference documentation <http://mumak.net/testtools/apidocs/>
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
+
diff --git a/lib/testtools/doc/make.bat b/lib/testtools/doc/make.bat
new file mode 100644
index 0000000000..f8c1fd520a
--- /dev/null
+++ b/lib/testtools/doc/make.bat
@@ -0,0 +1,113 @@
+@ECHO OFF
+
+REM Command file for Sphinx documentation
+
+set SPHINXBUILD=sphinx-build
+set BUILDDIR=_build
+set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
+if NOT "%PAPER%" == "" (
+ set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
+)
+
+if "%1" == "" goto help
+
+if "%1" == "help" (
+ :help
+ echo.Please use `make ^<target^>` where ^<target^> is one of
+ echo. html to make standalone HTML files
+ echo. dirhtml to make HTML files named index.html in directories
+ echo. pickle to make pickle files
+ echo. json to make JSON files
+ echo. htmlhelp to make HTML files and a HTML help project
+ echo. qthelp to make HTML files and a qthelp project
+ echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
+ echo.  changes           to make an overview of all changed/added/deprecated items
+ echo. linkcheck to check all external links for integrity
+ echo. doctest to run all doctests embedded in the documentation if enabled
+ goto end
+)
+
+if "%1" == "clean" (
+ for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
+ del /q /s %BUILDDIR%\*
+ goto end
+)
+
+if "%1" == "html" (
+ %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
+ echo.
+ echo.Build finished. The HTML pages are in %BUILDDIR%/html.
+ goto end
+)
+
+if "%1" == "dirhtml" (
+ %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
+ echo.
+ echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
+ goto end
+)
+
+if "%1" == "pickle" (
+ %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
+ echo.
+ echo.Build finished; now you can process the pickle files.
+ goto end
+)
+
+if "%1" == "json" (
+ %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
+ echo.
+ echo.Build finished; now you can process the JSON files.
+ goto end
+)
+
+if "%1" == "htmlhelp" (
+ %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
+ echo.
+ echo.Build finished; now you can run HTML Help Workshop with the ^
+.hhp project file in %BUILDDIR%/htmlhelp.
+ goto end
+)
+
+if "%1" == "qthelp" (
+ %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
+ echo.
+ echo.Build finished; now you can run "qcollectiongenerator" with the ^
+.qhcp project file in %BUILDDIR%/qthelp, like this:
+ echo.^> qcollectiongenerator %BUILDDIR%\qthelp\testtools.qhcp
+ echo.To view the help file:
+ echo.^> assistant -collectionFile %BUILDDIR%\qthelp\testtools.qhc
+ goto end
+)
+
+if "%1" == "latex" (
+ %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
+ echo.
+ echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
+ goto end
+)
+
+if "%1" == "changes" (
+ %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
+ echo.
+ echo.The overview file is in %BUILDDIR%/changes.
+ goto end
+)
+
+if "%1" == "linkcheck" (
+ %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
+ echo.
+ echo.Link check complete; look for any errors in the above output ^
+or in %BUILDDIR%/linkcheck/output.txt.
+ goto end
+)
+
+if "%1" == "doctest" (
+ %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
+ echo.
+ echo.Testing of doctests in the sources finished, look at the ^
+results in %BUILDDIR%/doctest/output.txt.
+ goto end
+)
+
+:end
diff --git a/lib/testtools/doc/overview.rst b/lib/testtools/doc/overview.rst
new file mode 100644
index 0000000000..e43265fd1e
--- /dev/null
+++ b/lib/testtools/doc/overview.rst
@@ -0,0 +1,96 @@
+======================================
+testtools: tasteful testing for Python
+======================================
+
+testtools is a set of extensions to the Python standard library's unit testing
+framework. These extensions have been derived from many years of experience
+with unit testing in Python and come from many different sources. testtools
+also ports recent unittest changes all the way back to Python 2.4.
+
+What better way to start than with a contrived code snippet?::
+
+ from testtools import TestCase
+ from testtools.content import Content
+ from testtools.content_type import UTF8_TEXT
+ from testtools.matchers import Equals
+
+ from myproject import SillySquareServer
+
+ class TestSillySquareServer(TestCase):
+
+ def setUp(self):
+            super(TestSillySquareServer, self).setUp()
+ self.server = self.useFixture(SillySquareServer())
+ self.addCleanup(self.attach_log_file)
+
+ def attach_log_file(self):
+ self.addDetail(
+ 'log-file',
+                Content(UTF8_TEXT,
+                        lambda: open(self.server.logfile, 'r').readlines()))
+
+ def test_server_is_cool(self):
+ self.assertThat(self.server.temperature, Equals("cool"))
+
+ def test_square(self):
+ self.assertThat(self.server.silly_square_of(7), Equals(49))
+
+
+Why use testtools?
+==================
+
+Better assertion methods
+------------------------
+
+The standard assertion methods that come with unittest aren't as helpful as
+they could be, and there aren't quite enough of them. testtools adds
+``assertIn``, ``assertIs``, ``assertIsInstance`` and their negatives.
+
+
+Matchers: better than assertion methods
+---------------------------------------
+
+Of course, in any serious project you want to be able to have assertions that
+are specific to that project and the particular problem that it is addressing.
+Rather than forcing you to define your own assertion methods and maintain your
+own inheritance hierarchy of ``TestCase`` classes, testtools lets you write
+your own "matchers", custom predicates that can be plugged into a unit test::
+
+ def test_response_has_bold(self):
+ # The response has bold text.
+ response = self.server.getResponse()
+ self.assertThat(response, HTMLContains(Tag('bold', 'b')))
+
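+A matcher is simply an object with a ``match`` method that returns ``None``
+on success, or a mismatch object describing the failure. The ``HTMLContains``
+matcher above is invented; here is a minimal sketch of a real one that checks
+divisibility::
+
+    from testtools import TestCase
+    from testtools.matchers import Matcher, Mismatch
+
+    class IsDivisibleBy(Matcher):
+        """Match if the candidate is evenly divisible by a divisor."""
+
+        def __init__(self, divisor):
+            self.divisor = divisor
+
+        def __str__(self):
+            return 'IsDivisibleBy(%r)' % (self.divisor,)
+
+        def match(self, actual):
+            remainder = actual % self.divisor
+            if remainder != 0:
+                # Returning a Mismatch signals failure; None means success.
+                return Mismatch('%r is not divisible by %r (remainder %r)'
+                                % (actual, self.divisor, remainder))
+
+    class TestDivisibility(TestCase):
+
+        def test_ten_is_divisible_by_five(self):
+            self.assertThat(10, IsDivisibleBy(5))
+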
+
+More debugging info, when you need it
+--------------------------------------
+
+testtools makes it easy to add arbitrary data to your test result. If you
+want to know what's in a log file when a test fails, or what the load was on
+the computer when a test started, or what files were open, you can add that
+information with ``TestCase.addDetail``, and it will appear in the test
+results if that test fails.
+
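+For example, here is a short sketch that attaches the system load average
+when the test runs (``/proc/loadavg`` is Linux-specific, and ``text_content``
+is assumed to be the convenience helper in ``testtools.content`` that wraps a
+plain string as a UTF-8 text detail)::
+
+    from testtools import TestCase
+    from testtools.content import text_content
+
+    class TestUnderLoad(TestCase):
+
+        def test_something(self):
+            # The detail is recorded now, but only shown if the test fails.
+            self.addDetail(
+                'load-average', text_content(open('/proc/loadavg').read()))
+            self.assertEqual(4, 2 + 2)  # stand-in for the real assertion
+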
+
+Extend unittest, but stay compatible and re-usable
+--------------------------------------------------
+
+testtools goes to great lengths to allow serious test authors and test
+*framework* authors to do whatever they like with their tests and their
+extensions while staying compatible with the standard library's unittest.
+
+testtools has completely parametrized how exceptions raised in tests are
+mapped to ``TestResult`` methods and how tests are actually executed (ever
+wanted ``tearDown`` to be called regardless of whether ``setUp`` succeeds?).
+
+It also provides many simple but handy utilities, like the ability to clone a
+test, a ``MultiTestResult`` object that lets many result objects get the
+results from one test suite, and adapters to bring legacy ``TestResult``
+objects into our new golden age.
+
+
+Cross-Python compatibility
+--------------------------
+
+testtools gives you the very latest in unit testing technology in a way that
+will work with Python 2.4, 2.5, 2.6, 2.7 and 3.1.