tracktable.core.test_utilities module

Module contents

test_utilities - Functions to verify test output

Our Python tests typically create text files or images. This module contains functions that will let us easily verify that they match our ground truth files, including computing differences where appropriate.

tracktable.core.test_utilities.compare_html_docs(expected, actual, ignore_uuids=False)[source]

Compare two HTML documents, optionally ignoring certain UUIDs (helpful when comparing HTML output from folium).
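
A minimal usage sketch, assuming expected_html and actual_html hold the text of the two documents (the exact parameter type is not documented on this page):

    >>> from tracktable.core import test_utilities
    >>> result = test_utilities.compare_html_docs(expected_html, actual_html,
    ...                                           ignore_uuids=True)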

tracktable.core.test_utilities.compare_html_to_ground_truth(filename, ground_truth_dir, test_dir, ignore_uuids=False)[source]

Compare an HTML document to a ground truth HTML document. Appends filename to each of the given directory paths and compares the HTML documents at the resulting locations, ignoring certain UUIDs if ignore_uuids is set.
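
For example, to compare test_output/my_map.html against ground_truth/my_map.html (the directory names here are placeholders):

    >>> from tracktable.core import test_utilities
    >>> result = test_utilities.compare_html_to_ground_truth(
    ...     'my_map.html', 'ground_truth', 'test_output',
    ...     ignore_uuids=True)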

tracktable.core.test_utilities.compare_image_to_ground_truth(filename, ground_truth_dir, test_dir, tolerance=1)[source]

Compare test image to ground truth image.

Parameters
  • filename (str) – Filename for image

  • ground_truth_dir (str) – Path to ground truth directory

  • test_dir (str) – Path to test directory

Keyword Arguments

tolerance (float) – Number from 0 to 255 describing how much per-pixel difference is acceptable (Default: 1)

Returns

An Error or No Error status, depending on whether the images match within the given tolerance
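
A sketch of a typical call, with placeholder directories and a slightly loosened tolerance:

    >>> from tracktable.core import test_utilities
    >>> status = test_utilities.compare_image_to_ground_truth(
    ...     'trajectory_map.png', 'ground_truth', 'test_output',
    ...     tolerance=2)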

tracktable.core.test_utilities.compare_images(expected, actual, tolerance=1)[source]

Compare two images pixel-by-pixel.

Parameters
  • expected (str) – Filename for expected image

  • actual (str) – Filename for actual image

Keyword Arguments

tolerance (float) – Number from 0 to 255 describing how much per-pixel difference is acceptable (Default: 1)

Returns

None if the images compare equal; a string explaining the problem if they differ

Note

At present we delegate this to Matplotlib’s image comparison routine since it does all kinds of nice conversions and measurements. We will bring this in-house if we ever need to.
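
Since the return value is documented above, a sketch of the check might look like this:

    >>> from tracktable.core import test_utilities
    >>> problem = test_utilities.compare_images('expected.png', 'actual.png')
    >>> if problem is None:
    ...     print('Images match within tolerance')
    ... else:
    ...     print('Images differ: ' + problem)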

tracktable.core.test_utilities.create_random_point(point_class, add_properties=False)[source]
tracktable.core.test_utilities.create_random_trajectory(trajectory_class, point_class, min_length=10, max_length=100)[source]
tracktable.core.test_utilities.create_random_trajectory_point(point_class)[source]
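
These three factories take the classes to instantiate as arguments. A sketch using the terrestrial domain classes (the choice of domain is an assumption; the factories themselves are domain-agnostic):

    >>> from tracktable.core import test_utilities
    >>> from tracktable.domain.terrestrial import Trajectory, TrajectoryPoint
    >>> point = test_utilities.create_random_point(TrajectoryPoint,
    ...                                            add_properties=True)
    >>> traj_point = test_utilities.create_random_trajectory_point(TrajectoryPoint)
    >>> trajectory = test_utilities.create_random_trajectory(
    ...     Trajectory, TrajectoryPoint, min_length=10, max_length=100)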
tracktable.core.test_utilities.image_mse(imageA, imageB)[source]

Compute the ‘Mean Squared Error’ (MSE) between two images

The MSE is the average of the squared per-pixel differences between the two images

Parameters
  • imageA (Image) – PIL Image A

  • imageB (Image) – PIL Image B

Returns

The MSE; the lower the error, the more “similar” the two images are
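
A sketch, assuming both images are PIL images of identical dimensions (a pixel-wise difference requires matching sizes):

    >>> from PIL import Image
    >>> from tracktable.core import test_utilities
    >>> image_a = Image.open('expected.png')
    >>> image_b = Image.open('actual.png')
    >>> mse = test_utilities.image_mse(image_a, image_b)  # 0.0 for identical images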

tracktable.core.test_utilities.image_pCorr(imageA, imageB)[source]

Compute the Pearson correlation coefficient between two images

Parameters
  • imageA (Image) – PIL Image A

  • imageB (Image) – PIL Image B

Returns

The maximum Pearson correlation between image A (truth) and all possible 1-pixel shifts of image B
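
A sketch, again assuming two same-size PIL images; a value close to 1 indicates a near-perfect match even if one image is offset by a single pixel:

    >>> from PIL import Image
    >>> from tracktable.core import test_utilities
    >>> truth = Image.open('expected.png')
    >>> result = Image.open('actual.png')
    >>> corr = test_utilities.image_pCorr(truth, result)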

tracktable.core.test_utilities.image_shifter(imageA, imageB)[source]

Create a resized copy of the truth image and shifted, resized copies of the result image.

Parameters
  • imageA (Image) – PIL image A

  • imageB (Image) – PIL image B

Returns

A 9 x (rows-2) x (cols-2) x channels NumPy array
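
A sketch of the call (the documented return packs the nine one-pixel alignments into a single NumPy array; its exact layout beyond the shape is not specified on this page):

    >>> from PIL import Image
    >>> from tracktable.core import test_utilities
    >>> truth = Image.open('expected.png')
    >>> result = Image.open('actual.png')
    >>> shifted = test_utilities.image_shifter(truth, result)
    >>> # shifted.shape == (9, rows - 2, cols - 2, channels)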

tracktable.core.test_utilities.pickle_and_unpickle(thing)[source]
tracktable.core.test_utilities.set_random_coordinates(thing, min_coord=-85, max_coord=85)[source]
tracktable.core.test_utilities.set_random_properties(thing)[source]
tracktable.core.test_utilities.version_appropriate_string_buffer(contents=None)[source]
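
These helpers carry no docstrings on this page, so the sketch below leans on their names and signatures and should be treated as an assumption: pickle_and_unpickle round-trips an object through pickle, the two set_random_* functions randomize a point in place, and version_appropriate_string_buffer returns an in-memory string buffer suited to the running Python version.

    >>> from tracktable.core import test_utilities
    >>> from tracktable.domain.terrestrial import TrajectoryPoint
    >>> point = TrajectoryPoint()
    >>> test_utilities.set_random_coordinates(point, min_coord=-85, max_coord=85)
    >>> test_utilities.set_random_properties(point)
    >>> result = test_utilities.pickle_and_unpickle(point)
    >>> buffer = test_utilities.version_appropriate_string_buffer()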