Improve speed of RBF-interpolation #25
Comments
Can you try creating the interpolant with the `neighbors` argument set, e.g. `neighbors=50`? That makes building the interpolant much faster, although the accuracy may deteriorate.
Hello Mr. Hines, thanks for the quick response. I have two results here for a small test scattered dataset (x_obs shape = (9000, 3)). It's a difference plot of the interpolated and target values. The LHS is obtained using the default parameters: it takes 14 s to build the interpolant and 78 s to evaluate the values (u_itp) at 10^6 locations. The RHS uses `neighbors=50`: this takes 4 s to build the interpolant, but 278 s to evaluate. The accuracy deteriorates as you mentioned. Can you tell me the default value of the neighbors parameter? Also, can you tell me why the time to evaluate u_itp increases when reducing neighbors? Thanks once again, Paul
Suppose you have N observation points. If `neighbors` is not set (the default is to use all N observations), building the interpolant requires solving a dense N-by-N linear system, but evaluating it afterwards is just a kernel matrix-vector product. If you set `neighbors=k`, there is no large upfront solve; instead, the k nearest observations are found for each evaluation point and a small k-by-k system is solved for each neighborhood, which is why evaluation becomes the expensive step. You may also find more information here: https://rbf.readthedocs.io/en/latest/interpolate.html
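A minimal sketch of the two modes, assuming the `RBFInterpolant` constructor documented at the link above; the toy data below is illustrative, not the dataset from this thread:

```python
import numpy as np
from rbf.interpolate import RBFInterpolant

# toy scattered data standing in for the (9000, 3) observations above
rng = np.random.default_rng(0)
x_obs = rng.uniform(-1.0, 1.0, (9000, 3))
u_obs = np.sin(x_obs).sum(axis=1)
x_itp = rng.uniform(-1.0, 1.0, (100_000, 3))  # evaluation points

# global mode (the default): one dense (N, N) solve up front,
# then evaluation is a single kernel matrix-vector product
interp_global = RBFInterpolant(x_obs, u_obs)
u_global = interp_global(x_itp)

# local mode: no big upfront solve, but every evaluation point needs
# a k-nearest-neighbor search plus its own small (k, k) solve
interp_local = RBFInterpolant(x_obs, u_obs, neighbors=50)
u_local = interp_local(x_itp)
```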
Thank you so much for the explanation and the link. So, as I understand it, this is the difference between global and local interpolation: setting `neighbors` causes local interpolation, which is faster to build but also creates discontinuities in the interpolated function. I have attached a 2D figure of a model scattered grid in my case. As you can see, the grid is dense around the nuclei and gets sparse farther away. When doing local interpolation, I believe I will have issues at evaluation points in these sparse regions of observables, since the nearest observations are not really neighbors for these peripheral evaluation points. Can you suggest a way to deal with this issue and improve the discontinuity/accuracy? For example, I was thinking of optimizing the sigma or the eps parameters. Thanks once again for the fast response, Paul
Your understanding of how `neighbors` works is correct. Is there any noise in your observations that would cause you to not want to fit the data exactly? If not, then you should leave the `sigma` parameter at its default of zero. To get better neighborhoods for local interpolation, maybe you can try some sort of transformation that makes the observation points more evenly distributed. For example, warp each point's distance to the center of the grid so that the dense core and the sparse periphery become comparable.
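One possible warp, sketched below for concreteness; the logarithmic form is an assumption for illustration, not something prescribed in this thread:

```python
import numpy as np

def log_warp(x, center):
    """Move each point along its ray from `center` so that its
    distance r to the center becomes log(1 + r)."""
    v = x - center
    r = np.linalg.norm(v, axis=1, keepdims=True)
    unit = np.divide(v, r, out=np.zeros_like(v), where=r > 0)
    return center + unit * np.log1p(r)

# toy cloud that is dense near a nucleus and sparse far away,
# mimicking the scattered grid described above
rng = np.random.default_rng(1)
dirs = rng.normal(size=(5000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
x_obs = rng.exponential(scale=2.0, size=(5000, 1)) * dirs

center = x_obs.mean(axis=0)
x_obs_warped = log_warp(x_obs, center)
# fit RBFInterpolant on x_obs_warped, and apply the same warp to the
# evaluation points before calling the interpolant; the interpolated
# values themselves need no un-warping
```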
Mr. Hines, thank you once again for your suggestion. I have to learn about the concept of "warping", as I am not familiar with it. I will try out your suggestion. Best regards, Paul.
Hello Mr. Hines,
I am trying to interpolate a scattered set of data points in 3D (x, y, z, phi) onto a regular rectangular 3D grid in a "smooth" manner, i.e. with derivative continuity. Using RBFInterpolant, I tend to get pretty accurate results, but I want to know how to improve the speed of the process. My scattered dataset is around 5 x 10^4 points (x_obs shape = (52850, 3); u_obs shape = (52850, 1)), and I want to interpolate them to 10^6 regular grid points (x_itp = 100^3), i.e. a mesh of 100 grid points on each of the x, y, and z axes.
Currently, it takes more than 30 minutes to perform the interpolation. I observed that you are using k-NN. Is it this part that takes the most time, or is it the LU decomposition? What is the bottleneck? Could you suggest ways to speed up the performance?
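To see which stage dominates, the fit and the evaluation can be timed separately. A minimal sketch, assuming the `RBFInterpolant` API from this library, with stand-in data in place of the real observations:

```python
import time
import numpy as np
from rbf.interpolate import RBFInterpolant

# regular 100 x 100 x 100 evaluation grid, as described above
axis = np.linspace(0.0, 1.0, 100)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing='ij')
x_itp = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])  # (10^6, 3)

# stand-ins for the (52850, 3) observations; substitute the real data
# (note: a global fit at this N builds a dense matrix of roughly 21 GiB)
rng = np.random.default_rng(0)
x_obs = rng.uniform(0.0, 1.0, (52850, 3))
u_obs = np.sin(2 * np.pi * x_obs).prod(axis=1)

t0 = time.perf_counter()
interp = RBFInterpolant(x_obs, u_obs)  # dense (N, N) solve
t1 = time.perf_counter()
u_itp = interp(x_itp)                  # kernel products against all N points
t2 = time.perf_counter()
print(f'fit: {t1 - t0:.1f} s, evaluate: {t2 - t1:.1f} s')
```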
A separate issue is related to memory. When using x_obs with shape (225000, 3) scattered points, I get the error:
numpy.core._exceptions.MemoryError: Unable to allocate 377. GiB for an array with shape (225044, 225044) and data type float64
Is there a way to bypass this?
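(That figure is consistent with a dense kernel matrix: 225,044^2 entries x 8 bytes per float64 is about 4.05 x 10^11 bytes, i.e. 377 GiB. So the allocation that fails is the full N-by-N system, which the neighbors option discussed above avoids building in one piece.)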
Thanks in advance,
Paul