Batch binding #280
Would you envision something where multiple SPs can be bound at once, like so:

```python
base_sp.bind(first_sp, second_sp, third_sp, ...)
```

Or rather something where you can convert a vector to the Fourier space, perform the operations there, and then convert back?

```python
in_fourier = base_sp.to_fourier()
in_fourier.bind(first_in_fourier)
in_fourier.bind(second_in_fourier)
in_fourier.bind(third_in_fourier)
bound_sp = in_fourier.to_sp()
```

I suppose the first variant would be easier to implement, whereas the second variant gives more flexibility. The examples are of course rough sketches of how the API could look; actual names would likely be different.
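For reference, HRR binding is circular convolution, so a variadic `bind` could do a single FFT round-trip no matter how many operands it receives. A minimal plain-NumPy sketch (the `bind` helper is hypothetical, not part of nengo_spa's API):

```python
import numpy as np

def bind(base, *others):
    """Bind `base` with any number of vectors (circular convolution),
    using one FFT round-trip regardless of the number of operands."""
    out = np.fft.rfft(base)
    for other in others:
        # Convolution theorem: circular convolution is element-wise
        # multiplication of the Fourier coefficients.
        out = out * np.fft.rfft(other)
    return np.fft.irfft(out, n=len(base))

rng = np.random.RandomState(0)
a, b, c = rng.randn(3, 16)

# Binding all operands at once matches binding them pairwise,
# since circular convolution is associative.
abc = bind(a, b, c)
ab_then_c = bind(bind(a, b), c)
assert np.allclose(abc, ab_then_c)
```

Because circular convolution is also commutative, the order of the extra operands would not matter for HRRs, though it might for other algebras.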
I was also envisioning something like: you might have the following batches/tensors of SPs, […] where […], and then want to do stuff like […]. With the HRR algebra, that can be done by staying in the Fourier domain the entire time, and using NumPy operations that are vectorized across the batch (e.g., […]).
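The Fourier-domain batching described above can be sketched in plain NumPy (the shapes and names here are illustrative assumptions, not nengo_spa API): everything is transformed once, all binding happens as broadcasting multiplications, and a single inverse FFT comes at the end.

```python
import numpy as np

rng = np.random.RandomState(1)
batch, d = 8, 64
A = rng.randn(batch, d)   # a batch of SP vectors, one per row
B = rng.randn(batch, d)
c = rng.randn(d)          # a single SP broadcast against the batch

# Move everything into the Fourier domain once...
FA = np.fft.rfft(A, axis=-1)
FB = np.fft.rfft(B, axis=-1)
Fc = np.fft.rfft(c)

# ...do all the work there, vectorized across the batch
# (binds row i of A with row i of B, and every row with c)...
F_result = FA * FB * Fc

# ...and return with a single inverse FFT at the end.
result = np.fft.irfft(F_result, n=d, axis=-1)

# Check one row against direct O(d^2) circular convolution.
def circ_conv(x, y):
    d = len(x)
    return np.array([sum(x[j] * y[(i - j) % d] for j in range(d))
                     for i in range(d)])

assert np.allclose(result[0], circ_conv(circ_conv(A[0], B[0]), c))
```

NumPy's broadcasting takes care of mixing batched and single vectors, so the same code covers both cases.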
Something like this already works; you can put:

```python
A = np.array([spa.SemanticPointer(...), spa.SemanticPointer(...), ...])
B = np.array([spa.SemanticPointer(...), spa.SemanticPointer(...), ...])
C = spa.SemanticPointer(...)  # or alternatively np.array([spa.SemanticPointer(...)])
result = A * (B ** 2.5) * C  # the ** operator is not yet implemented, but should work this way once it is
```

But I suppose you're also thinking of an optimization to avoid doing the IFFT followed by the FFT again in between operations? For that I could think of two approaches:

1. This could be implemented as an additional generator in the […]
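The `B ** 2.5` above refers to fractional binding powers. One plausible semantics, assumed here for illustration (the helper name is hypothetical and the branch-cut choice is a simplification), is to exponentiate the Fourier coefficients element-wise:

```python
import numpy as np

def fractional_bind_power(x, p):
    """Raise a vector to a (possibly fractional) binding power by
    exponentiating its Fourier coefficients element-wise. For
    non-unitary vectors this involves a branch choice for complex
    powers; only the real inverse transform is returned."""
    return np.fft.irfft(np.fft.rfft(x) ** p, n=len(x))

rng = np.random.RandomState(2)
x = rng.randn(32)

# For integer powers this matches repeated self-binding
# (circular convolution of x with itself).
x_sq = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(x), n=32)
assert np.allclose(fractional_bind_power(x, 2), x_sq)
```

For fractional `p` on arbitrary (non-unitary) vectors the result depends on the branch of the complex power, which is presumably part of why the operator is not implemented yet.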
When I was mentioning the speed-up, that is actually what I was comparing to. I used to do this in the above style, and it does indeed work functionally, but it's quite slow. It's much faster to batch it in NumPy, because it does everything in a constant number of C routines that each do the work in parallel, as opposed to looping and sequentially invoking a Python method that then dispatches separate C routines for each SP. Here's a quick-and-dirty benchmarking example:

```python
import numpy as np
import nengo_spa as spa

rng = np.random.RandomState(seed=0)
n, d = 1024, 256
A = rng.randn(n, d)
B = rng.randn(n, d)

sspA = np.asarray([spa.SemanticPointer(a) for a in A])
sspB = np.asarray([spa.SemanticPointer(b) for b in B])

%timeit sspC = sspA * sspB
%timeit C = np.fft.irfft(np.fft.rfft(A) * np.fft.rfft(B))
assert np.allclose(C, [c.v for c in sspC])
```

So in this case it's ~10x faster to batch it in NumPy. I'm using a Conda install of […]
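The same comparison can be reproduced outside IPython with the standard `timeit` module; this variant emulates the per-object loop in pure NumPy (one FFT round-trip per pair), so it runs without nengo_spa installed:

```python
import timeit

import numpy as np

rng = np.random.RandomState(seed=0)
n, d = 1024, 256
A = rng.randn(n, d)
B = rng.randn(n, d)

def loop_bind():
    # One FFT round-trip per pair, driven from a Python loop --
    # roughly what iterating over SemanticPointer objects does.
    return np.array([np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=d)
                     for a, b in zip(A, B)])

def batched_bind():
    # A constant number of batched C-level FFT calls for the whole array.
    return np.fft.irfft(np.fft.rfft(A) * np.fft.rfft(B), n=d)

# Both styles compute the same batch of circular convolutions.
assert np.allclose(loop_bind(), batched_bind())

t_loop = timeit.timeit(loop_bind, number=10)
t_batch = timeit.timeit(batched_bind, number=10)
print(f"loop: {t_loop:.3f}s  batched: {t_batch:.3f}s  "
      f"speedup: {t_loop / t_batch:.1f}x")
```

The exact speedup depends on `n`, `d`, and the BLAS/FFT build, so the ~10x figure above should be read as indicative rather than universal.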
By @arvoelke in this comment: