tracktable.core.test_utilities module¶
Module contents¶
test_utilities - Functions to verify test output
Our Python tests typically create text files or images. This module contains functions that will let us easily verify that they match our ground truth files, including computing differences where appropriate.
- tracktable.core.test_utilities.compare_html_docs(expected, actual, ignore_uuids=False)[source]¶
Compare two HTML documents. Compares the two documents given, optionally ignoring certain UUIDs (helpful when comparing HTML output from Folium).
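The core idea can be sketched with the standard library alone: mask UUID-like tokens in both documents before comparing, so that Folium's randomly generated element IDs do not cause spurious mismatches. The helper name and the UUID regex below are assumptions for illustration, not Tracktable's actual implementation.

```python
import re

# Matches 32-hex-digit tokens with or without dashes (assumed UUID shape;
# Folium embeds IDs like "map_0123...abcdef" in its output).
_UUID_PATTERN = re.compile(
    r"[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}|[0-9a-f]{32}"
)

def compare_html_ignoring_uuids(expected_html, actual_html):
    """Hypothetical sketch: compare two HTML strings after masking UUIDs."""
    masked_expected = _UUID_PATTERN.sub("UUID", expected_html)
    masked_actual = _UUID_PATTERN.sub("UUID", actual_html)
    return masked_expected == masked_actual
```

With masking in place, two Folium maps that differ only in their generated element IDs compare equal.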
- tracktable.core.test_utilities.compare_html_to_ground_truth(filename, ground_truth_dir, test_dir, ignore_uuids=False)[source]¶
Compare an HTML document to a ground-truth HTML document. Appends filename to the given paths and compares the HTML documents at those locations, ignoring certain UUIDs if requested.
- tracktable.core.test_utilities.compare_image_to_ground_truth(filename, ground_truth_dir, test_dir, tolerance=1)[source]¶
Compare test image to ground truth image.
- Parameters
- Keyword Arguments
tolerance (float) – Number from 0 to 255 describing how much per-pixel difference is acceptable (Default: 1)
- Returns
Error or No Error, depending on the result of the comparison
- tracktable.core.test_utilities.compare_images(expected, actual, tolerance=1)[source]¶
Compare two images pixel-by-pixel.
- Parameters
- Keyword Arguments
tolerance (float) – Number from 0 to 255 describing how much per-pixel difference is acceptable (Default: 1)
- Returns
None if images compare equal, string explaining problem if images are different
Note
At present we delegate this to Matplotlib’s image comparison routine since it does all kinds of nice conversions and measurements. We will bring this in-house if we ever need to.
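A tolerance-based pixel comparison can be sketched as follows. This is a simplified stand-in, not the Matplotlib routine the module actually delegates to; the function name and the NumPy-array inputs (in place of PIL images) are assumptions for illustration.

```python
import numpy as np

def compare_pixels(expected, actual, tolerance=1):
    """Hypothetical sketch of a per-pixel comparison with a tolerance.

    `expected` and `actual` are uint8 arrays (height x width [x channels]).
    Returns None if the images compare equal within tolerance, or a string
    explaining the problem otherwise -- mirroring the documented contract.
    """
    if expected.shape != actual.shape:
        return "Image sizes differ: {} vs {}".format(expected.shape, actual.shape)
    # Widen to a signed type so the subtraction cannot wrap around.
    diff = np.abs(expected.astype(np.int16) - actual.astype(np.int16))
    worst = int(diff.max())
    if worst > tolerance:
        return "Maximum per-pixel difference {} exceeds tolerance {}".format(
            worst, tolerance)
    return None
```

Returning `None` on success and a descriptive string on failure lets a test simply assert `compare_pixels(a, b) is None` and print the message when the assertion fails.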
- tracktable.core.test_utilities.create_random_trajectory(trajectory_class, point_class, min_length=10, max_length=100)[source]¶
- tracktable.core.test_utilities.image_mse(imageA, imageB)[source]¶
Compute the ‘Mean Squared Error’ between two images
The ‘Mean Squared Error’ is the average of the squared differences between corresponding pixels of the two images
- Parameters
imageA (Image) – PIL Image A
imageB (Image) – PIL Image B
- Returns
The MSE; the lower the error, the more similar the two images are
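The computation itself is one line of NumPy. A minimal sketch, assuming the images have already been converted to equal-sized arrays (the function name here is illustrative, not the module's):

```python
import numpy as np

def mse(image_a, image_b):
    """Mean squared error between two equal-sized images (as arrays)."""
    a = np.asarray(image_a, dtype=np.float64)
    b = np.asarray(image_b, dtype=np.float64)
    # Average of squared per-pixel differences; 0.0 means identical images.
    return float(np.mean((a - b) ** 2))
```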
- tracktable.core.test_utilities.image_pCorr(imageA, imageB)[source]¶
Compute the Pearson correlation coefficient between two images
- Parameters
imageA – PIL Image A
imageB – PIL Image B
- Returns
The maximum Pearson correlation between image A (ground truth) and all possible 1-pixel shifts of image B.
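Taking the maximum over 1-pixel shifts makes the comparison robust to a tiny rendering offset between the two images. A minimal sketch of the idea, assuming array inputs and wrap-around shifts via `np.roll` (the actual boundary handling in Tracktable may differ):

```python
import numpy as np

def pearson_with_shifts(image_a, image_b):
    """Hypothetical sketch: max Pearson correlation between image A and
    every shift of image B by at most one pixel in each direction."""
    a = np.asarray(image_a, dtype=np.float64)
    b = np.asarray(image_b, dtype=np.float64)
    best = -1.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            # Wrap-around shift (an assumption; edge pixels could instead
            # be cropped or padded).
            shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
            r = np.corrcoef(a.ravel(), shifted.ravel())[0, 1]
            best = max(best, r)
    return best
```

Because Pearson correlation is invariant to linear rescaling, an image and a brightness- or contrast-adjusted copy of it still score 1.0.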