Convert tests from dict of dicts to list of tuples to avoid key overwrites #167
Conversation
…rwriting problems.

Note on master commit 804adc: running
This corresponds to the
My expected output for that line is actually

I see, the
gives me
so something is funny with

Not sure how this should be addressed within this PR, since the PR is about changing unit test formatting. I could comment out the failing unit test so that the PR commit builds. Otherwise I could change the unit test result so that it passes, but that feels like cheating. The other alternative is to wait for a fix to whatever is causing the unit test failure, so that this PR can go through.
Oh, sorry for the spam. I see what's going on. If no precision is supplied in the format string, then the PDG rules for significant figures on the uncertainty set the precision, rather than the Python defaults. So I think the output from the program is correct, and the expected unit test result is wrong. But even if I fix this one, I think there are more unit test failures after this one that will appear one by one. @lebigot, what should this PR, which unburies more unit tests, do with respect to the unburied unit tests failing? Just go through, or wait for resolutions?
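For reference, the PDG significant-figure rule mentioned above can be sketched as follows. This is a simplified, standalone illustration (not the library's actual implementation), assuming the commonly stated form of the rule: look at the three leading digits of the uncertainty and keep two significant digits if they fall in 100-354, one if in 355-949, and two (rounding up) if in 950-999.

```python
import math

def pdg_num_digits(uncertainty: float) -> int:
    """Number of significant digits to keep in an uncertainty, per the
    Particle Data Group (PDG) rounding convention: take the three
    leading digits of the uncertainty, keep 2 significant digits if
    they are in 100-354, 1 if in 355-949, and 2 (rounding the
    uncertainty up) if in 950-999."""
    if uncertainty <= 0 or not math.isfinite(uncertainty):
        raise ValueError("uncertainty must be a positive finite number")
    # Scale the uncertainty so its three leading digits form an
    # integer in [100, 999] (truncation, not rounding).
    exponent = math.floor(math.log10(uncertainty))
    leading = int(uncertainty / 10 ** (exponent - 2))
    if leading <= 354:
        return 2
    elif leading <= 949:
        return 1
    else:
        return 2  # e.g. an uncertainty of 0.97 is reported as 1.0
```

So an uncertainty like 0.25 keeps two digits while 0.5 keeps only one, which is the kind of behavior that makes a naively written expected test result wrong when no explicit precision is given.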
Could you get this running again? I ended up setting this to noqa to get the linter to pass. It would be good to set it up as a pytest-style test with the different cases parameterised.
When reviewing #248, I did the same thing as @jagerber48, deduplicating the keys locally and finding at least one test failure. I had wanted to get back to that as part of removing the
Yes, I can revisit this. And yes, in doing so I'll upgrade to the pytest.mark.parametrize style. Here I enumerate the failures I found. Some of the failures look like mistakes in the expected test results, but some look like they reveal either bugs, or at least ambiguities, in the format specification.
Before the pytest.mark.parametrize refactor, a bug involving repeated keys in the test dictionary meant that many tests were not actually running. Moving to pytest.mark.parametrize re-exposed these tests, but unfortunately some of them are failing.
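The silent overwrite is ordinary Python dict behavior: a literal with a repeated key keeps only the last value, and the interpreter does not complain (linters do, which is why a noqa was needed). A minimal sketch with invented case data:

```python
# A dict literal with a repeated key silently keeps only the LAST
# value, so earlier groups of format cases stored under the same
# (value, uncertainty) key are discarded before any test runs.
# The case data here is invented for illustration.
tests = {
    (1.0, 0.1): {"%.1f": "1.0+/-0.1"},    # silently discarded
    (1.0, 0.1): {"%.2f": "1.00+/-0.10"},  # only this entry survives
}

assert len(tests) == 1
assert tests[(1.0, 0.1)] == {"%.2f": "1.00+/-0.10"}
```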
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
##           master     #167      +/-   ##
==========================================
+ Coverage   96.33%   96.46%   +0.13%
==========================================
  Files          15       16       +1
  Lines        1909     1898      -11
==========================================
- Hits         1839     1831       -8
+ Misses         70       67       -3

☔ View full report in Codecov by Sentry.
Ok, I refactored the tests to use pytest.mark.parametrize. I manually copied the tests from the old structure to the new. I tried hard not to miss anything, but I'm only human, and I couldn't think of an easy way to automate it. After doing this I found 8 tests that were failing. These can be seen in this diff. In a following post I'll try to enumerate why I think the tests are failing. Some of them look like typos in the expected output; some look like typos in the intended format specification string, or forgetting how edge cases in the formatting work. A few of them test cases where I don't actually know what to expect. For the cases where I truly don't know how to resolve the test case, I'm going to leave it as an expected failure and open a new issue to try to figure it out. Going through these tests reminds me that there are some pretty confusing rules when it comes to the format specification.
Failing tests:
I'll resolve all of the failing tests in the next commit. Failing test 5 will continue to fail after these corrections; I think it is a real failure. I'm going to mark it as an expected failure and open an issue.
Opened #259
TLDR: This PR is ready for review.
that looks good, thanks for getting to it
This PR converts the test_format test in the test_formatting.py module to use pytest.mark.parametrize. This solves #162 (Bug: repeated dict keys in formatting testing results in some keys going untested), since cases are no longer stored as dict keys (which were accidentally repeated). Now each case is a 4-tuple. Previously, test cases with the same value/uncertainty had the option to be grouped together within a dict under that value/uncertainty tuple. That option is lost, since I don't really know how to do that in pytest. I've done this using unittest.subTest in the past; I think there might be a pytest extension that does something like this.

There is a lot of cruft in comments, and possibly tests, about subtleties between old versions of Python. Hopefully this can be removed soon.
OLD DESCRIPTION:
See #162.
The tests object containing test cases for ufloat string formatting is currently a dict where keys are tuples of value/uncertainty pairs and values are dicts of format string/expected output pairs. Similar tests are chunked together under the same value/uncertainty pair. However, the same value/uncertainty pair appears multiple times in the top-level dict. In this case only one grouping of tests is captured in the dict, and all others are overwritten.

To avoid this overwriting we can convert the dict of dicts into a list of tuples. In this case elements of the list are tuples whose first element is a value/uncertainty pair tuple and whose second element is a dict of format string/expected output pairs.
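The conversion described above, sketched with invented example data:

```python
# Old structure: a dict keyed by (value, uncertainty); a repeated key
# silently replaces the earlier group of cases.
tests_dict = {
    (1.0, 0.1): {"%.1f": "1.0+/-0.1"},
}

# New structure: a list of ((value, uncertainty), {format: expected})
# tuples; the same pair may now appear any number of times.
tests_list = [
    ((1.0, 0.1), {"%.1f": "1.0+/-0.1"}),
    ((1.0, 0.1), {"%.2f": "1.00+/-0.10"}),
]

# Every case now reaches the test loop; nothing is shadowed.
case_count = sum(len(fmt_cases) for _, fmt_cases in tests_list)
assert case_count == 2
```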