For the current workflow, if there are hash library test failures in a `pytest` session, then these are easily discoverable thanks to the rich loveliness of the `html` figure comparison.
However, given any single hash library failure, there are essentially three possible outcomes:
- the graphic test has detected a genuine failure that needs to be resolved (somehow)
- the `tolerance` of the test needs to be increased
- a new hash value needs to be associated with the test and recorded in the hash library, along with a new image (see the sketch below)
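For reference, a minimal sketch of where the hash library and tolerance come into play for a single test, assuming pytest-mpl's `mpl_image_compare` marker and a hypothetical hash library path `tests/figure_hashes.json`:

```python
import matplotlib.pyplot as plt
import pytest


# Minimal sketch: the figure hash is checked against the entry recorded in
# the hash library (hypothetical path below); `tolerance` only affects the
# RMS baseline-image comparison, not the exact hash check.
@pytest.mark.mpl_image_compare(
    hash_library="tests/figure_hashes.json",
    tolerance=2,
)
def test_simple_plot():
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])
    return fig
```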
Choosing to include some new hashes but not others can result in a blend of the existing hash library and the newly generated one (as opposed to accepting all new hashes).
We could extend the workflow to offer the user the choice to manually accept or reject each newly proposed hash. This would ease the burden of customising the hash library and of resolving any `result-failed-diff.png` images, all of which should then be placed under version control (somewhere).
We adopt this approach on SciTools/iris and SciTools/tephi; for example, see SciTools/tephi#78.
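A minimal sketch of what the accept/reject step could look like, assuming the hash libraries are flat JSON mappings of test name to hash string, and using hypothetical paths for the existing and newly generated libraries:

```python
import json
from pathlib import Path

# Hypothetical locations: the existing (version-controlled) hash library and
# the one freshly generated by the failing pytest session.
EXISTING = Path("tests/figure_hashes.json")
PROPOSED = Path("results/figure_hashes.json")


def merge_hash_libraries(existing_path: Path, proposed_path: Path) -> dict:
    """Prompt for each changed hash, returning the blended hash library."""
    existing = json.loads(existing_path.read_text())
    proposed = json.loads(proposed_path.read_text())

    merged = dict(existing)
    for test_name, new_hash in sorted(proposed.items()):
        old_hash = existing.get(test_name)
        if old_hash == new_hash:
            continue  # unchanged, nothing to decide
        answer = input(f"{test_name}: accept new hash {new_hash!r} "
                       f"(was {old_hash!r})? [y/N] ")
        if answer.strip().lower() == "y":
            merged[test_name] = new_hash
    return merged


if __name__ == "__main__":
    blended = merge_hash_libraries(EXISTING, PROPOSED)
    EXISTING.write_text(json.dumps(blended, indent=2, sort_keys=True))
```

The blended library (plus any refreshed baseline images) would then be committed as usual.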
I am not sure I am 100% following what you are saying here.
I don't think there should ever be a case where a hash comparison fails, i.e. you shouldn't be using the `tolerance` parameter in the image test to "bypass" a hash failure. So I am not entirely sure why you would ever want to not accept all the changes to a hash library.
(I am not sure that you can't do what I think you are suggesting, I just don't think you should)