Test suite fails #24
This is with the latest release: I'm at the v3.3.0 tag.
I don't have those failures. What version of Python are you using? Is there a reason you aren't using pytest to run the tests? It will give more details on the failure if you do. There's a tox.ini in the repo. The README says:
Hi. This is Debian. It's doing the management of the dependencies and the environment, not tox. I believe I'm running the correct tests, yes? This is the Python we're about to ship in the new Debian release:
It looks like the test is trying to match up error reports, but these are meant for humans, so the exact text is allowed to change. The test failure is unhappy because this:
Does not look like this:
Thanks
I don't understand why the traceback would be different than expected. This test passes in many environments, including Ubuntu, which is based on Debian. Is Debian somehow interfering with error reports? Since cog is a tool for developers, and since it changes how Python is executed, I wanted to make sure the error reports were useful. That's why I'm checking the entire traceback.
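For illustration (a minimal sketch, not cog's actual code): when a tool compiles and executes Python source itself, the filename it passes to compile() appears in tracebacks, but the surrounding formatting (caret markers, quoted source lines) varies across Python versions, which is why exact-text comparisons of tracebacks can be fragile:

```python
import traceback

# Sketch only (not cog's implementation): execute a string of code the
# way a code-generation tool might, and capture the resulting traceback.
code = compile("1/0", "<generated>", "exec")
try:
    exec(code)
except ZeroDivisionError:
    tb = traceback.format_exc()

# The compile() filename and the exception type are stable; the exact
# layout around them is not guaranteed across Python versions.
print(tb)
```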
Hi Ned.
Ned Batchelder ***@***.***> writes:
> I don't understand why the traceback would be different than expected.
> This test passes in many environments, including Ubuntu, which is based
> on Debian. Is Debian somehow interfering with error reports?
The distro doesn't matter. This is almost certainly something that
changes in different versions of Python, and I'm guessing you've only
looked at older Python builds. You're looking at a message intended for
humans, and expect exact text. This was never guaranteed to be stable,
and it looks like it has changed. Can you convert your test to
regex-search for the strings you actually care about? Or better yet, the
test should look only at output intended for machines.
Also, the warnings fixed by the patch in the previous email were
legitimate. Would you consider merging the patch?
Thanks
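Converting an exact-match traceback assertion into a regex search, as suggested above, could look roughly like this (a sketch; the function and test names are hypothetical, not from cog's suite):

```python
import re
import traceback

def run_failing_snippet():
    # Hypothetical stand-in for running a cog file containing an error.
    try:
        exec(compile("1/0", "<cog input>", "exec"))
    except ZeroDivisionError:
        return traceback.format_exc()

def test_error_report_mentions_location():
    output = run_failing_snippet()
    # Assert only on the parts that matter: the reported location and
    # the exception type, not the version-dependent formatting.
    assert re.search(r'File "<cog input>", line 1', output)
    assert re.search(r"ZeroDivisionError", output)
```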
Thanks, I've changed all the file opening code. I would still like to understand why your traceback looks different. I understand your point about tracebacks meant for humans, but they are also supposed to accurately report the locations causing errors, and yours does not. Is there some way I can reproduce what you are seeing? Is there a Docker image?
Hi. Thanks for making the changes.
To reproduce, install Debian/bookworm (the latest release), and build
there. I don't recall how exactly I was trying to run the tests, but
currently I simply disable the whole test suite:
https://salsa.debian.org/python-team/packages/python-cogapp/-/blob/a7938f23bd9d0a986c7ec2fa49f635422ed196ee/debian/rules#L13
You can replicate the package build:
- apt-get source python3-cogapp
- cd python3-cogapp
- dpkg-buildpackage -us -uc -b
You can remove the "override_dh_auto_test" business from debian/rules,
and rebuild the package to see what it says. Let me know if you need
help. I'm pretty sure I was doing this without tox, but I don't recall
the details at the moment. If you want to try to nail this down, I'm
happy to work on it again.
I don't think I will be able to install bookworm, and I don't know Debian well enough to update debian/rules. The more details you can provide, the more likely we can get to the bottom of it.
I think Debian (Bookworm or otherwise) is a red herring. I'm on macOS, running Python 3.11.6 and cog commit 201ad15.
I'm not saying whether it should or shouldn't work, but it seems OS-independent. When I print the tracebacks I see what's already posted above (#24 (comment)).
Digging into the difference between pytest and unittest, I see these values when run under unittest:
I don't know why the stack trace is different between the two test runners, but it doesn't matter. The tests pass when run with pytest. Don't use unittest here.
Hi. I'm making a cogapp package for Debian. I'm on a recent Debian system, and there are a bunch of failures of the test suite. Does everything pass for you?
I do this:
There are many loud warnings about unclosed files. This patch makes the complaints go away:
But this still leaves two test failures:
Thanks
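The unclosed-file warnings described above are what Python's ResourceWarning reports when a file object is left for the garbage collector to close; the usual fix is a context manager. A generic sketch of that kind of change (an assumption, not the actual patch):

```python
# Before: the file object is closed only when garbage-collected,
# which triggers ResourceWarning under pytest or `python -W error`.
def read_leaky(path):
    return open(path).read()

# After: the with-block closes the file deterministically.
def read_clean(path):
    with open(path) as f:
        return f.read()
```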