Update testing suite to pytest #655
Conversation
Remove python3.12 test
Remove nose
Switch script tests to pytest
Thank you for this @eachanjohnson! You're right: I've confirmed the 18 failing tests with Python 3.11 outside of the GitHub Actions too. Given past experiences, and looking into a few of the failures, I'm assuming this is due to changes in how dictionaries work between py3.10 and py3.11, but I can't see anything in the release notes relating to this. @IanSudbery - do you have any idea what's going on here?
So I had a quick look at one test. There seem to be a couple of things going on here. One is that the sort order has gotten out of whack. Taking the first failed test:
There are 2866 differences:
However, if we sort, there are only 3:
My guess is that this must be a change in how pysam (and possibly htslib/samtools) sorts bam files. There are still three lines different though:
Three extra reads that aren't in the reference.
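The effect of sorting before diffing can be illustrated with toy data (this is a sketch, not the actual test harness: `difflib` stands in for whatever comparison the real tests use, and the read lines are invented):

```python
import difflib

def count_diffs(observed_lines, reference_lines, presort=False):
    """Count differing lines between two outputs, optionally sorting first."""
    if presort:
        observed_lines = sorted(observed_lines)
        reference_lines = sorted(reference_lines)
    diff = difflib.unified_diff(reference_lines, observed_lines, lineterm="")
    # Count only the +/- content lines, not the ---/+++ headers.
    return sum(1 for line in diff
               if line.startswith(("+", "-"))
               and not line.startswith(("+++", "---")))

# Toy data: same records in a different order, plus one genuinely extra read.
ref = ["read1\t...", "read2\t...", "read3\t..."]
obs = ["read3\t...", "read1\t...", "read2\t...", "read4\t..."]
print(count_diffs(obs, ref))                # inflated by ordering differences
print(count_diffs(obs, ref, presort=True))  # only the extra read remains
```

Sorting both sides first collapses the spurious ordering differences, leaving only the real discrepancies, which mirrors how 2866 raw differences shrink to 3 above.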
The sort order issue can probably be sorted by adding more sort directives in the tests.

I believe the remaining issues are caused by a change in how ties are broken when a pseudo-random choice between two equivalent UMIs is being made. At the moment, I think we just choose whichever comes out on top when we pull from the dict. One way in which this might have changed is a change in the default hashing algorithm between Python versions.

@TomSmithCGAT This leaves us in the somewhat difficult position of deciding whether to implement different tests for different Python versions, or finally chase down getting something predictable out of extracting things from a dict. One alternative would be to explicitly check whether the top choice is a tie, and use an explicit random choice to pick. I don't know how much slowdown this would give.
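The explicit tie-break idea can be sketched as follows (a minimal illustration, not umi_tools code; the function name, seed, and toy counts are all hypothetical):

```python
import random

def pick_top_umi(counts, rng=None):
    """Pick the UMI with the highest count, breaking ties explicitly.

    Relying on max(counts, key=counts.get) alone resolves ties by dict
    iteration order, which depends on insertion order (and, for some key
    types, on hashing), so the result can vary between runs and versions.
    """
    top = max(counts.values())
    # Sort the tied candidates so the choice no longer depends on dict order.
    tied = sorted(u for u, c in counts.items() if c == top)
    if len(tied) == 1:
        return tied[0]
    rng = rng or random.Random(42)  # fixed seed keeps test output reproducible
    return rng.choice(tied)

counts = {"ATGC": 5, "ATGG": 5, "TTTT": 2}
print(pick_top_umi(counts))  # same answer regardless of dict insertion order
```

The extra `sorted` and tie check only run when there actually is a tie among the top counts, which is why the slowdown question matters mainly for datasets with many equal-count UMIs.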
I think we can just merge #550 and be done with the non-deterministic issues, right? For some reason, I thought that was waiting on you running some benchmarks, but I see that was done a full year ago 😱. If you're still happy with the changes in the code in #550, I'll resolve the conflicts and update the test files so we can merge that PR.
@TomSmithCGAT If you had time to do that, that would be great.
Hi @eachanjohnson. I finally got around to merging #550, which should resolve the non-determinism causing failures here. I tried to push commits to this branch so it would pass the tests, but I got a permission denied error. It's just a few steps:
After that, you should see that the tests pass regardless of whether
Thanks, @TomSmithCGAT! Feeling ambitious, I added python3.12 to the tests, which also passed 🚀
@eachanjohnson Thanks for doing that. @TomSmithCGAT Are we good to merge?
Yes. All good. I'll do that now. Thank you again @eachanjohnson 🙏
In running tests for PR #654, I ran into an issue with testing on my python3.11 platform. The nose package itself was running into import errors, and on further digging, it does not seem to be maintained any longer.
So I've lightly rewritten tests/test_style.py and tests/test_umi_tools.py to use pytest.
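For illustration, here is the shape of such a conversion with a hypothetical test (the option strings are placeholders, not necessarily real umi_tools options): nose's yield-based test generators become pytest parametrized tests:

```python
import pytest

# Under nose, a test over several inputs was often written as a generator:
#
#     def test_options():
#         for opt in ["--option-a", "--option-b"]:
#             yield check_option, opt
#
# pytest dropped support for yield tests; parametrize is the replacement,
# and it reports each input as its own named test case.
@pytest.mark.parametrize("opt", ["--option-a", "--option-b"])
def test_option_accepted(opt):
    # A real test would invoke the tool with `opt` and check its output;
    # here we just check the placeholder option string's shape.
    assert opt.startswith("--")
```

A side benefit of parametrize over the old generators is that a failure in one input no longer masks the remaining cases.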
The tests now run for python3.11, so I expanded the range of Python versions in GitHub Actions. Unfortunately, umi_tools doesn't compile under python3.12, which I think is a known issue (#627). Also, not all the tests run successfully, which I guess is related to the non-deterministic issue #362 and PRs #550 and #98.
This may not be the kind of contribution you're looking for, but I ended up doing the work so I thought I'd hand it over 😄