matplotlib.testing#
Helper functions for testing.
- matplotlib.testing.subprocess_run_for_testing(command, env=None, timeout=None, stdout=None, stderr=None, check=False, text=True, capture_output=False)[source]#
Create and run a subprocess.
Thin wrapper around subprocess.run, intended for testing. Will mark fork() failures on Cygwin as expected failures: not a success, but not indicating a problem with the code either.
- Parameters:
- command : list of str
- env : dict[str, str]
- timeout : float
- stdout, stderr
- check : bool
- text : bool
Also called universal_newlines in subprocess. This name was chosen because the main effect is returning bytes (False) vs. str (True), though it also tries to normalize newlines across platforms.
- capture_output : bool
Set stdout and stderr to subprocess.PIPE.
- Returns:
- proc : subprocess.Popen
- Raises:
- pytest.xfail
If the platform is Cygwin and the subprocess reports a fork() failure.
See also
subprocess_run_helper
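A minimal usage sketch (not part of the original docs; assumes matplotlib is installed) running a small Python snippet and reading its captured output:
    import sys
    from matplotlib.testing import subprocess_run_for_testing

    proc = subprocess_run_for_testing(
        [sys.executable, "-c", "import matplotlib; print(matplotlib.__version__)"],
        timeout=60, check=True, capture_output=True)
    # text=True by default, so the captured output is str rather than bytes.
    print(proc.stdout.strip())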
- matplotlib.testing.subprocess_run_helper(func, *args, timeout, extra_env=None)[source]#
Run a function in a sub-process.
- Parameters:
- func : function
The function to be run. It must be in a module that is importable.
- *args : str
Any additional command line arguments to be passed in the first argument to subprocess.run.
- extra_env : dict[str, str]
Any additional environment variables to be set for the subprocess.
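A hedged sketch of typical use in a test module (names are illustrative, not from the docs); the target function must live at module level so the child process can import and call it:
    from matplotlib.testing import subprocess_run_helper

    def _draw_in_subprocess():
        # Runs in the child process; must be importable at module level.
        import matplotlib.pyplot as plt
        plt.figure()
        plt.plot([1, 2, 3])

    def test_draw_in_subprocess():
        # timeout is keyword-only; any *args are forwarded on the
        # child's command line.
        subprocess_run_helper(_draw_in_subprocess, timeout=60)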
matplotlib.testing.compare#
Utilities for comparing image results.
- matplotlib.testing.compare.calculate_rms(expected_image, actual_image)[source]#
Calculate the per-pixel errors, then compute the root mean square error.
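The metric is equivalent to the following NumPy sketch (an illustration, not the library's exact implementation; rms_difference is a hypothetical name):
    import numpy as np

    def rms_difference(expected, actual):
        # Per-pixel error, then root mean square over all pixels and channels.
        diff = expected.astype(float) - actual.astype(float)
        return np.sqrt(np.mean(diff ** 2))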
- matplotlib.testing.compare.comparable_formats()[source]#
Return the list of file formats that compare_images can compare on this system.
- Returns:
- list of str
E.g. ['png', 'pdf', 'svg', 'eps'].
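A common pattern (a sketch under the assumption that tests run under pytest) is to skip comparisons for formats whose external converters are unavailable:
    import pytest
    from matplotlib.testing.compare import comparable_formats

    def test_pdf_output():
        if 'pdf' not in comparable_formats():
            pytest.skip("PDF comparison is not supported on this system")
        # ... generate and compare a PDF figure here ...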
- matplotlib.testing.compare.compare_images(expected, actual, tol, in_decorator=False)[source]#
Compare two "image" files checking differences within a tolerance.
The two given filenames may point to files which are convertible to PNG via the converter dictionary. The underlying RMS is calculated with the calculate_rms function.
- Parameters:
- expected : str
The filename of the expected image.
- actual : str
The filename of the actual image.
- tol : float
The tolerance (a color value difference, where 255 is the maximal difference). The test fails if the average pixel difference is greater than this value.
- in_decorator : bool
Determines the output format. If called from the image_comparison decorator, this should be True. (default=False)
- Returns:
- None or dict or str
Return None if the images are equal within the given tolerance.
If the images differ, the return value depends on in_decorator. If in_decorator is true, a dict with the following entries is returned:
- rms: The RMS of the image difference.
- expected: The filename of the expected image.
- actual: The filename of the actual image.
- diff_image: The filename of the difference image.
- tol: The comparison tolerance.
Otherwise, a human-readable multi-line string representation of this information is returned.
Examples
    img1 = "./baseline/plot.png"
    img2 = "./output/plot.png"
    compare_images(img1, img2, 0.001)
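Because None is returned on success, a simple assertion pattern (a sketch, not from the docs) is:
    result = compare_images(img1, img2, tol=0.001)
    if result is not None:
        raise AssertionError(result)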
matplotlib.testing.decorators#
- matplotlib.testing.decorators.check_figures_equal(*, extensions=('png', 'pdf', 'svg'), tol=0)[source]#
Decorator for test cases that generate and compare two figures.
The decorated function must take two keyword arguments, fig_test and fig_ref, and draw the test and reference images on them. After the function returns, the figures are saved and compared.
This decorator should be preferred over image_comparison when possible in order to keep the size of the test suite from ballooning.
- Parameters:
- extensions : list, default: ["png", "pdf", "svg"]
The extensions to test.
- tol : float
The RMS threshold above which the test is considered failed.
- Raises:
- RuntimeError
If any new figures are created (and not subsequently closed) inside the test function.
Examples
Check that calling Axes.plot with a single argument plots it against [0, 1, 2, ...]:
    @check_figures_equal()
    def test_plot(fig_test, fig_ref):
        fig_test.subplots().plot([1, 3, 5])
        fig_ref.subplots().plot([0, 1, 2], [1, 3, 5])
- matplotlib.testing.decorators.image_comparison(baseline_images, extensions=None, tol=0, freetype_version=None, remove_text=False, savefig_kwarg=None, style=('classic', '_classic_test_patch'))[source]#
Compare images generated by the test with those specified in baseline_images, which must correspond, else an ImageComparisonFailure exception will be raised.
- Parameters:
- baseline_images : list or None
A list of strings specifying the names of the images generated by calls to Figure.savefig.
If None, the test function must use the baseline_images fixture, either as a parameter or with pytest.mark.usefixtures. This value is only allowed when using pytest.
- extensions : None or list of str
The list of extensions to test, e.g. ['png', 'pdf'].
If None, defaults to all supported extensions: png, pdf, and svg.
When testing a single extension, it can be directly included in the names passed to baseline_images. In that case, extensions must not be set.
In order to keep the size of the test suite from ballooning, we only include the svg or pdf outputs if the test is explicitly exercising a feature dependent on that backend (see also the check_figures_equal decorator for that purpose).
- tol : float, default: 0
The RMS threshold above which the test is considered failed.
Due to expected small differences in floating-point calculations, on 32-bit systems an additional 0.06 is added to this threshold.
- freetype_version : str or tuple
The expected freetype version or range of versions for this test to pass.
- remove_text : bool
Remove the title and tick text from the figure before comparison. This is useful to make the baseline images independent of variations in text rendering between different versions of FreeType.
This does not remove other, more deliberate, text, such as legends and annotations.
- savefig_kwarg : dict
Optional arguments that are passed to the savefig method.
- style : str, dict, or list
The optional style(s) to apply to the image test. The test itself can also apply additional styles if desired. Defaults to ["classic", "_classic_test_patch"].
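A hedged usage sketch (the test and baseline image names are hypothetical; a matching baseline file would have to exist in the test suite's baseline images directory):
    import matplotlib.pyplot as plt
    from matplotlib.testing.decorators import image_comparison

    @image_comparison(baseline_images=['sine_ramp'], extensions=['png'],
                      remove_text=True)
    def test_sine_ramp():
        # Every figure created here is saved and compared, in order,
        # against the entries of baseline_images.
        fig, ax = plt.subplots()
        ax.plot([0, 1, 2, 3], [0, 1, 0, -1])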
matplotlib.testing.exceptions#
- exception matplotlib.testing.exceptions.ImageComparisonFailure[source]#
Bases: AssertionError
Raise this exception to mark a test as a comparison between two images.
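A hedged sketch of raising it from a hand-rolled comparison helper (assert_images_match is a hypothetical name, not part of matplotlib):
    from matplotlib.testing.compare import compare_images
    from matplotlib.testing.exceptions import ImageComparisonFailure

    def assert_images_match(expected, actual, tol=0.0):
        # Surface compare_images failures as ImageComparisonFailure so
        # runners report them as image-comparison errors.
        err = compare_images(expected, actual, tol)
        if err is not None:
            raise ImageComparisonFailure(err)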