OpenBLAS on Windows substantially slower than other BLAS flavours #160
All these numbers look a little strange to me - why would MKL be only marginally better than the entirely unoptimized reference implementation? And I'd expect even the baseline OpenBLAS - an MSVC build that can only make use of the generic C sources - to be slightly faster than the reference as well.
The answer is almost certainly that the cumulative runtime of blas/lapack calls, as a portion of the total runtime of the entire scipy test suite, gets dominated by random variability of the CI agent (memory and CPU contention, caches etc.). Under this hypothesis, only when something takes way longer on average in the blas/lapack interface does it actually make a difference. (The difference between CI agents is likely also determined by aspects other than just the available CPU instructions - numpy does runtime dispatching, but none of our blas/lapack builds have been compiled to take advantage of AVX instructions, for example.)
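On the noise point: one way to see through CI variability is to time a BLAS-heavy call repeatedly and take the minimum, since contention can only ever add time. A minimal sketch of that idea - the matrix size, repeat count, and function name here are illustrative choices, not something from this thread:

```python
# Sketch: isolate BLAS-call timing from CI noise by repeating a
# dgemm-heavy operation and keeping the minimum over several runs.
import time
import numpy as np

def bench_dgemm(n=2000, repeats=7):
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal((n, n))
    timings = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        _ = a @ b  # dispatches to whatever BLAS numpy was linked against
        timings.append(time.perf_counter() - t0)
    # The minimum is the least noise-contaminated estimate; the spread
    # between min and max indicates how noisy the agent is.
    return min(timings), max(timings)

if __name__ == "__main__":
    lo, hi = bench_dgemm()
    print(f"dgemm 2000x2000: min {lo:.3f}s, max {hi:.3f}s")
```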
Sounds like the issue in OpenMathLib/OpenBLAS#4582. Was the …
That issue was fixed in OpenMathLib/OpenBLAS#4587, which landed in 0.3.27 (which is the version we're using across conda-forge already).
Hmm, that needs investigating then. The timings make it almost certain to be a similar deadlock or lock-contention issue; it would take a 10x-100x slowdown of BLAS/LAPACK calls to explain the test suite being that slow. There was a similar issue at scipy/scipy#20585 (comment).
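If the lock-contention hypothesis is right, forcing OpenBLAS down its single-threaded path should make the slowdown disappear. A hedged sketch of that check using the threadpoolctl package (the timing harness around it is an illustrative assumption, not from this thread):

```python
# Sketch: compare default-threaded vs. single-threaded BLAS timing.
# If contention in the Windows thread server is the culprit, limits=1
# should be dramatically faster relative to the broken multithreaded path.
import time
import numpy as np
from threadpoolctl import threadpool_limits

def timed_dgemm(n=1500):
    a = np.ones((n, n))
    b = np.ones((n, n))
    t0 = time.perf_counter()
    _ = a @ b
    return time.perf_counter() - t0

print(f"default threads: {timed_dgemm():.3f}s")
with threadpool_limits(limits=1, user_api="blas"):
    print(f"single thread:   {timed_dgemm():.3f}s")
```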
There have been no other changes to the Windows thread server code in 0.3.27 since then (as far as I am aware right now), so I think the best option for testing would be to swap in the old version of blas_server_win32.c from OpenMathLib/OpenBLAS@66904f8 (this will probably need adding back the global declaration of …
(though I would assume that any residual problem in mseminatore's PRs should have come up when testing the fix in PR #4587)
The test case from OpenMathLib/OpenBLAS#4582 passes for me without any apparent delays, so I don't think it's that.
Also seems to me the "similar" issue is/was plagued by some kind of infighting between duplicate libraries?
That's an interesting conjecture, though I don't see how that can happen in conda-forge, where we generally take care to unvendor things so there's only one copy - especially for numpy and scipy, where we keep a close eye on the builds.
Can you post a link to the logs?
Here you go (plus any other still remaining runs of …
First impression is that it appears to be …
In those PRs you cannot just look at the overall runtime, because we're trying to forcefully make a distinction between CPUs with and without AVX512F/AVX512CD, and the job aborts if the CPU architecture expectation isn't met (some past & present failures behaved differently, and Azure doesn't provide a way to influence which agent you get). When that happens, the test suite has zero runtime. On Windows, roughly everything over 60min actually ran the tests, everything under 50min didn't, and in between it depends on circumstances (fast or slow agent).
It's quite unlikely that a single test (or anything less than widespread/systemic) could blow out the test times like that, especially since many tests are completely unrelated to BLAS. Tests also have a 20min timeout, and none of them hits it.
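One way to check the "widespread vs. single-test" question directly is pytest's built-in --durations report: a handful of outliers points at specific tests, while a systemic slowdown inflates the whole list. A hedged sketch of driving it from Python (invoking the suite this way is an illustrative assumption, not how the CI here runs it):

```python
# Sketch: run the scipy test suite with pytest's slowest-test report.
# --durations=50 prints the 50 slowest tests (a built-in pytest flag).
import subprocess
import sys

subprocess.run(
    [sys.executable, "-m", "pytest", "--pyargs", "scipy",
     "--durations=50", "-q"],
    check=False,
)
```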
umm, did you ever get around to testing with the pre-0.3.26 blas_server_win32.c as per #160 (comment)?
Thanks for the reminder - I didn't understand the ask at the time, but I've dug a bit into the git history and I think/hope I got it right; currently building an OpenBLAS version with this in #162. We might already see a difference in the BLAS testing on this feedstock (compare "Good news …
thank you - meanwhile it looks like I might have to revert that PR anyway due to the sunpy thread safety issue that came up on numpy :(
In the context of the BLAS variant testing for scipy, I noticed that the OpenBLAS runs were much slower. I did some basic timing comparisons based on what pytest reports as the overall runtime of the scipy test suite (the timing also depends quite a bit on whether the agent has AVX512F/AVX512CD or not, and this is random across Azure's fleet, so I'm taking the average across implementation & CPU type):
Overall, OpenBLAS ends up being 3-5x slower than all the other BLAS/LAPACK implementations, which to me is indicative of something going very wrong somewhere.
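For what it's worth, the averaging described above amounts to grouping runs by (BLAS implementation, CPU type) and comparing group means. A minimal sketch of that bookkeeping - the numbers below are placeholders, not the measurements from this issue:

```python
# Sketch of the comparison methodology: group test-suite wall times by
# (BLAS implementation, AVX512-capable or not) and average each group.
from collections import defaultdict
from statistics import mean

runs = [  # (implementation, has_avx512, minutes) - illustrative only
    ("openblas", True, 180), ("openblas", False, 240),
    ("mkl", True, 55), ("mkl", False, 70),
    ("blis", True, 60), ("blis", False, 75),
]

groups = defaultdict(list)
for impl, avx512, minutes in runs:
    groups[(impl, avx512)].append(minutes)

for key, vals in sorted(groups.items()):
    print(key, f"mean: {mean(vals):.1f} min")
```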
CC @martin-frbg @rgommers