No improvement in performance using @codon.jit #607
You are converting a list of 100,000 elements (test_data) from Python to Codon one element at a time. It would be better to do the entire loop in Codon.
I just want to test whether @codon.jit would improve the performance of
get_median will only improve if you are not converting data from Python to Codon inside the function. But you are not testing the function in isolation; you are testing the whole loop. Even if get_median did NOTHING in Codon's version, it would probably be slower.
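To make the point above concrete, here is a plain-Python sketch (the thread never shows get_median's body, so this sorting-based version and the shape of test_data are assumptions). The per-call pattern is what the issue benchmarks: if get_median were decorated with @codon.jit, every call would convert its Python argument into a Codon value before doing any work. Putting the whole loop behind one jitted function converts the data only once.

```python
import random

def get_median(values):
    # Hypothetical stand-in for the get_median under test:
    # sort a copy and take the middle element (or midpoint of two).
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

def per_call_loop(windows):
    # Pattern from the issue: with @codon.jit on get_median,
    # each iteration would pay a Python-to-Codon conversion.
    return [get_median(w) for w in windows]

def whole_loop(windows):
    # Suggested alternative: decorate THIS function with
    # @codon.jit so data crosses the boundary a single time.
    out = []
    for w in windows:
        out.append(get_median(w))
    return out

random.seed(0)
data = [[random.random() for _ in range(50)] for _ in range(100)]
assert per_call_loop(data) == whole_loop(data)
```

Both versions compute the same result in plain Python; the difference only appears under the JIT, where the boundary-crossing cost dominates the first pattern.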
After doing some more tests, even this will be slower, since it needs to start the JIT:
So for a "fair" comparison you should call do_nothing() to warm up the JIT before running your test.
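A minimal sketch of that warm-up pattern, written in plain Python so it runs anywhere (with Codon you would decorate the functions with @codon.jit, and the first call to any jitted function would trigger compilation; the benchmark helper here is just an illustration, not part of codon's API):

```python
import time

def do_nothing():
    # Under @codon.jit, the first call to any jitted function
    # pays the one-time cost of starting the JIT and compiling.
    pass

def benchmark(fn, *args):
    # Time a single call and return (result, elapsed seconds).
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Warm up first so the timed run excludes JIT startup.
do_nothing()

result, elapsed = benchmark(sum, range(1_000_000))
print(f"sum took {elapsed:.6f}s -> {result}")
```

Without the warm-up call, the first timed run would fold the JIT's startup and compilation cost into the measurement, making the jitted version look slower than it is in steady state.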
So I think the best way may be to compile the Codon code into a Python extension and use it from Python code instead of going through the JIT. Could you please help me with the import error described in #605?
That will not help either: this test does lots of allocations, and Codon is not as well optimized for that as Python is. My tests show that you will get similar performance to the non-warmed-up jitted full loop.
In #605 I have uploaded a test to a repo that is a lot faster than the Python code; you can check that. But I cannot import the class into Python, since the code is compiled in a Codon file.
Weird, I have run your code (removing the import codon and the @codon.jit) directly with codon, and I do not see any improvement.
Here is my test code:
The result is:
Why is it slower than the normal version?