Make diagonal=True smarter? #37

Comments
A related idea is to always use a forward derivative for the smallest element of the x array and a backward derivative for the largest element in automatic mode.
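A minimal sketch of that idea, assuming a vectorized f and a scalar step h (hypothetical code, not jacobi's actual implementation):

```python
import numpy as np

def diag_derivative(f, x, h):
    """Sketch: central differences everywhere, except one-sided
    differences at the extreme elements of x."""
    x = np.asarray(x, dtype=float)
    d = (f(x + h) - f(x - h)) / (2 * h)  # central difference
    i_min, i_max = np.argmin(x), np.argmax(x)
    # forward difference for the smallest x, backward for the largest
    d[i_min] = (f(x[i_min] + h) - f(x[i_min])) / h
    d[i_max] = (f(x[i_max]) - f(x[i_max] - h)) / h
    return d
```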
Re: #37 (comment), it depends on the intent of relative …
Re: #37 (comment), that makes sense, but then you lose speed when one-sided differences aren't needed at all. I like the current approach - start with central differences, see whether function values are NaN, and if so, use one-sided differences. The difference is that it shouldn't be an all-or-nothing switch, using the same method for all elements of x.
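A per-element fallback could look like this sketch (hypothetical names; assumes a vectorized f and a scalar step h):

```python
import numpy as np

def diag_derivative(f, x, h):
    """Sketch: central difference first; fall back to one-sided
    differences only for elements where the central result is not finite."""
    x = np.asarray(x, dtype=float)
    d = (f(x + h) - f(x - h)) / (2 * h)
    bad = ~np.isfinite(d)
    if np.any(bad):
        xb = x[bad]
        fwd = (f(xb + h) - f(xb)) / h  # forward difference
        bwd = (f(xb) - f(xb - h)) / h  # backward difference
        # take whichever one-sided estimate is finite
        d[bad] = np.where(np.isfinite(fwd), fwd, bwd)
    return d
```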
The step must be both smaller than x and, as you say, not too small. Using a constant step size is fine if the x values are of the same order of magnitude, but people may also use an x array created with numpy.geomspace, so that it spans many orders of magnitude.
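To illustrate why a single absolute step fails over many orders of magnitude, here is a small sketch using a step proportional to each element (the factor 1e-3 is an arbitrary choice for the example):

```python
import numpy as np

x = np.geomspace(1e-8, 1e8, 17)  # spans 16 orders of magnitude

# a constant step like h = 1e-3 would be far too large for x ~ 1e-8
# (it would even push x - h negative); scale the step per element instead
h_rel = 1e-3 * np.abs(x)

# e.g. derivative of log(x): the exact answer is 1/x
f = np.log
d = (f(x + h_rel) - f(x - h_rel)) / (2 * h_rel)
print(np.max(np.abs(d * x - 1)))  # relative error stays small everywhere
```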
Me, too, but when you mentioned that you want to use this, for example, with a scipy pdf, you may not necessarily get NaN for the central difference result. If the pdf just returns 0 outside the domain, then applying the central difference formula at the edge point will not give NaN but some finite value. In other words, just checking for NaN is probably not enough. If we keep the current scheme of computing central differences first and only doing something different when that fails, we should also check whether the central difference is close to the average of the forward and backward difference estimates.
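A single-step sketch of such a consistency check (hypothetical; in jacobi the estimates would come from the polynomial extrapolation, whereas for a single step the plain average of the one-sided differences telescopes exactly to the central difference, so this sketch checks how much the two one-sided estimates disagree):

```python
import numpy as np

def central_is_trustworthy(f, x, h, rtol=1e-2):
    """Sketch: flag elements where the forward and backward estimates
    disagree strongly, which indicates the central difference straddles
    a domain boundary or a kink. rtol is an arbitrary choice here."""
    fwd = (f(x + h) - f(x)) / h
    bwd = (f(x) - f(x - h)) / h
    central = 0.5 * (fwd + bwd)  # identical to (f(x+h) - f(x-h)) / (2h)
    scale = np.maximum(np.abs(central), np.finfo(float).tiny)
    return np.abs(fwd - bwd) <= rtol * scale
```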
Ah, right. Personally, I think it is still enough for automatic detection, though. You are making an effort to detect an obvious case where the central difference is invalid, but you can't easily detect all cases, so the line needs to be drawn somewhere. How about allowing … I suppose it is a matter of philosophy - how you want to balance speed and ease of use (in extreme cases).
I see. Instead of making the step linearly proportional to the magnitude of … A separate issue is subtractive cancellation in the output. We can measure the subtractive loss of precision by comparing the magnitude of the difference between function values to the magnitude of the function values themselves, right? That can be detected in the termination condition, and it can either be folded into the reported error or trigger some corrective action.
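A sketch of such a measurement (hypothetical helper; the digit count could feed the termination condition or the reported error, as suggested above):

```python
import numpy as np

def cancellation_digits(f, x, h):
    """Sketch: estimate the number of significant digits lost to
    subtractive cancellation in the central-difference numerator."""
    fp, fm = f(x + h), f(x - h)
    tiny = np.finfo(float).tiny
    num = np.maximum(np.abs(fp - fm), tiny)                 # size of the difference
    mag = np.maximum(np.maximum(np.abs(fp), np.abs(fm)), tiny)  # size of the operands
    return np.log10(mag / num)  # ~16 means the difference is pure rounding noise
```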
I know this trick and rejected it, because computing a better h from (x + h) - x is expensive. We already achieve machine accuracy without this trick. The small error introduced by not using an h that is exactly representable is compensated for by the polynomial extrapolation.
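For reference, the trick being discussed is the standard one from numerical-differentiation texts (shown here on a forward difference of exp, with arbitrary example values):

```python
import numpy as np

x = 0.1
h = 1e-4

# make the step exactly representable relative to x: after this,
# temp - x reproduces h_exact without any additional rounding error
temp = x + h
h_exact = temp - x

d = (np.exp(temp) - np.exp(x)) / h_exact  # forward difference, adjusted step
print(d, np.exp(x))  # estimate vs exact derivative
```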
From a discussion with @mdhaber:
Yes, that's a speed trade-off. I am assuming here that the x values are roughly of equal scale and don't vary much in magnitude. If they do, then diagonal=True should not be used. At the very least, this needs to be properly documented.
The ideal solution in my view would be an algorithm that first groups x values of similar magnitude into blocks and then does the calculation using the same absolute step within each block. The speed of jacobi comes from doing array calculations as much as possible. Such an algorithm would give the same result as diagonal=True in the ideal case and would fall back to the slow one-x-value-at-a-time worst case if necessary.
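The grouping step of that idea could look like this sketch (hypothetical function; assumes nonzero x values, since log10(0) is undefined):

```python
import numpy as np

def magnitude_blocks(x, decades_per_block=1):
    """Sketch: bin the x values by order of magnitude so that each
    block can be processed as one array with a shared absolute step."""
    x = np.asarray(x, dtype=float)
    mag = np.floor(np.log10(np.abs(x))).astype(int)  # order of magnitude
    keys = mag // decades_per_block
    return {int(k): np.flatnonzero(keys == k) for k in np.unique(keys)}

# each block is then differentiated with one absolute step matched to its
# scale, e.g. h = 1e-3 * 10.0**k, keeping the array-based speed
blocks = magnitude_blocks(np.geomspace(1e-4, 1e4, 9))
```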