Fast bilinear interpolation in PyTorch on a regular grid, evaluated at an unstructured set of 2D query points, analogous to SciPy's RegularGridInterpolator.
pytorch_interpolation runs entirely in C++/CUDA backends and therefore significantly outperforms SciPy.
This repository implements a C++/CUDA extension for PyTorch (in the style of https://github.com/pytorch/extension-cpp).
At present, only bilinear 2D interpolation is implemented; points outside the domain of the Cartesian grid are handled by constant padding or linear extrapolation. The method assumes that the function values are known on a regular grid.
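To make the method concrete, here is a minimal pure-Python sketch of the bilinear formula on a regular grid, with constant padding outside the domain. This is an illustration only, not the package's C++/CUDA code; the helper name `bilinear` and its signature are hypothetical.

```python
import bisect

def bilinear(x, y, F, xq, yq, fill_value=0.0):
    """Bilinear interpolation of F (shape len(x) x len(y)) at one query point.

    x and y must be ascending. Outside the grid the constant fill_value is
    returned, mirroring the default padding behaviour described above.
    This is an illustrative sketch, not the package's implementation.
    """
    if not (x[0] <= xq <= x[-1] and y[0] <= yq <= y[-1]):
        return fill_value
    # locate the cell [x[i], x[i+1]] x [y[j], y[j+1]] containing the query
    i = min(bisect.bisect_right(x, xq) - 1, len(x) - 2)
    j = min(bisect.bisect_right(y, yq) - 1, len(y) - 2)
    # normalized coordinates within the cell, in [0, 1]
    tx = (xq - x[i]) / (x[i + 1] - x[i])
    ty = (yq - y[j]) / (y[j + 1] - y[j])
    # weighted average of the four surrounding grid values
    return ((1 - tx) * (1 - ty) * F[i][j]
            + tx * (1 - ty) * F[i + 1][j]
            + (1 - tx) * ty * F[i][j + 1]
            + tx * ty * F[i + 1][j + 1])
```

Because the interpolant is linear in each coordinate, it reproduces linear functions exactly inside the grid.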
First, you need to have PyTorch installed. Then you can install the repository with:
pip install .
To run the examples provided, install numpy, scipy and matplotlib:
pip install numpy scipy matplotlib
The repository was tested with Python 3.8 and PyTorch with CUDA 12.1.
The package works similarly to SciPy's RegularGridInterpolator, and its syntax is analogous. In example.py we provide a simple example and a comparison between our interpolation and SciPy's RegularGridInterpolator.
In a nutshell, suppose that x and y are the grids of data points you want to interpolate over (given in ascending order), F is the tensor of function values at these points, and xpt and ypt are the N-dimensional tensors of interpolation query points. Then you can call:
from pytorch_interp import RegularGridInterpolator
interp = RegularGridInterpolator(F,x,y,xpt,ypt)
G = interp(xpt,ypt)
Optionally, the constant fill value used for padding can be modified with fill_value=value. Bilinear extrapolation is also possible by setting fill_value=None (just like in SciPy).
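For reference, the equivalent SciPy call with the same fill_value semantics looks as follows. The grid sizes and the linear test function are illustrative; with fill_value=None, SciPy (like this package) extrapolates linearly, so a globally linear F is recovered exactly even outside the grid.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# illustrative regular grid and a linear test function (exact under bilinear)
x = np.linspace(0.0, 1.0, 11)
y = np.linspace(0.0, 2.0, 21)
X, Y = np.meshgrid(x, y, indexing="ij")
F = X + 2.0 * Y

# fill_value=None enables extrapolation; a constant would pad instead
rgi = RegularGridInterpolator((x, y), F, method="linear",
                              bounds_error=False, fill_value=None)

pts = np.array([[0.25, 0.5],   # inside the grid
                [1.50, 1.0]])  # outside in x: linearly extrapolated
G = rgi(pts)
```

Here G[0] equals 0.25 + 2*0.5 = 1.25 and G[1] equals 1.5 + 2*1.0 = 3.5, since linear extrapolation of a linear function is exact.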
The script performance.py benchmarks pytorch_interp on CPU/CUDA against SciPy and torch_interpolations, another PyTorch package for bilinear interpolation written entirely in Python.
All tests are done on an 11th Gen Intel(R) Core(TM) i7-11700 @ 2.50GHz CPU and an NVIDIA GeForce RTX 3060 GPU. For the CPU performance tests, Torch runs 8 threads in parallel.
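A benchmark along these lines can be timed as sketched below. This is not the actual performance.py; the problem size, test function, and number of query points are assumptions, and only the SciPy side is shown.

```python
import time
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# illustrative problem size; performance.py's actual sizes may differ
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(0.0, 1.0, 200)
X, Y = np.meshgrid(x, y, indexing="ij")
F = np.sin(X) * np.cos(Y)

# random unstructured query points inside the grid
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(100_000, 2))

rgi = RegularGridInterpolator((x, y), F)
t0 = time.perf_counter()
G = rgi(pts)
elapsed = time.perf_counter() - t0
print(f"SciPy: {elapsed * 1e3:.1f} ms for {len(pts)} query points")
```

The same query points can then be passed (as tensors) to pytorch_interp and torch_interpolations, timing each backend identically; on CUDA, remember to synchronize the device before reading the clock.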