
Binary spike encoding #158

Draft · wants to merge 5 commits into base: main
Conversation

@arvoelke (Contributor) commented Dec 18, 2018

This is a work-in-progress / proof-of-concept that seeks to improve the accuracy of a single-layer encode/decode network by minimizing the noise introduced when encoding the input into spikes.

This idea is courtesy of @tcstewar who suggested that the spike generator could be implemented using a binary code where each spike represents a bit of information in the binary representation of the node's input vector. This uses 2*d*k spike generators to represent any input vector from the (-1, +1)^d-cube with 2^k precision. There is no variability from the ideal PSC, because the synapse is None and the code is transmitted precisely every time-step. All of the error (on the encoding side) comes from quantizing the signed input values to k bits.
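A minimal sketch of the idea may help make it concrete. This is my own illustrative toy, not the PR's implementation: all names are hypothetical, and I assume the simplest layout of 2*d*k generators (k bits for the positive part and k bits for the negative part of each dimension), with each bit transmitted as a 0/1 spike every time step.

```python
import numpy as np

def binary_spike_encode(x, k):
    """Encode a vector x in (-1, +1)^d as binary spikes.

    Uses 2 * d * k generators: for each dimension, k bits for the
    positive part and k bits for the negative part. Returns a 0/1
    spike vector of length 2 * d * k for a single time step.
    """
    x = np.asarray(x, dtype=float)
    spikes = []
    for xi in x:
        pos = max(xi, 0.0)
        neg = max(-xi, 0.0)
        for value in (pos, neg):
            # Quantize the magnitude in [0, 1) to k bits.
            level = min(int(value * 2 ** k), 2 ** k - 1)
            # Most-significant bit first.
            spikes.extend((level >> (k - 1 - b)) & 1 for b in range(k))
    return np.array(spikes)

def binary_spike_decode(spikes, d, k):
    """Invert the encoding: weight bit b by 2^-(b+1)."""
    weights = 2.0 ** -(np.arange(k) + 1)
    s = spikes.reshape(d, 2, k)
    return s[:, 0] @ weights - s[:, 1] @ weights
```

Since the code is retransmitted exactly every step (no synaptic filter), the only error on this path is the quantization of each signed value to k bits, i.e., at most 2^-k per dimension.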

The trade-off is that this sends O(k) spikes to each input neuron every time-step, whereas the previous on/off encoding scheme was sparse in time (i.e., trading between spike density, dt, tau, and input frequency). To further explore this trade-off, the on/off code should be extended to support k heterogeneous and independent spike trains (i.e., inflate the generator's total spike count by a factor of k to reduce variance by a factor of sqrt(k)).
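The sqrt(k) variance argument in the last sentence can be checked numerically. The following is my own toy model of k independent Bernoulli (rate-coded) on/off trains, with made-up rate and dt parameters, not the PR's generator:

```python
import numpy as np

rng = np.random.default_rng(0)

def onoff_estimate(x, k, n_steps=1000, rate=100.0, dt=0.001):
    """Estimate x in (0, 1) from k independent on/off spike trains.

    Each train fires a spike per step with probability x * rate * dt;
    averaging over the k trains reduces the standard error by sqrt(k).
    """
    p = x * rate * dt
    spikes = rng.random((k, n_steps)) < p
    # Decode by scaling the mean spike count back to the input range.
    return spikes.mean() / (rate * dt)

# The spread across trials shrinks roughly like 1/sqrt(k).
trials_1 = [onoff_estimate(0.5, k=1) for _ in range(200)]
trials_16 = [onoff_estimate(0.5, k=16) for _ in range(200)]
```

With k=16 the standard deviation across trials is roughly a quarter of the k=1 case, which is the sqrt(k) reduction the comment describes.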

The implementation is a work-in-progress; I took the path-of-least-resistance in order to test this out as quickly as possible. At the very least this should help provide a starting template for what different encoding schemes might look like in code.

TODO:

  • Sort out an issue with this implementation that makes the accuracy far worse than it should be.
  • How should this be generalized to support user-configurable/pluggable spike generators?
  • Unify all encoding logic under the same class (currently split across files).
  • How to get the right scaling factor where it's needed?
  • Compare to on/off encoding replicated k times.
  • Unit test.
  • Regression test.

@arvoelke commented Dec 18, 2018

I refactored the code and added a unit test that demonstrates this works perfectly when the Nengo simulator does the same math. However, things sometimes go wrong with the Loihi emulator, and I can't figure out why. The improvement in accuracy seems to depend on some combination of the neuron model, the stimulus, and the use of the defaults:

nengo.Ensemble.max_rates.default = nengo.dists.Uniform(100, 120)
nengo.Ensemble.intercepts.default = nengo.dists.Uniform(-1, 0.5)

The most favourable condition I've found on this branch is a linear ramping input, with the above defaults, and the default neuron model.

[Figure "linear": decoded output for the ramp input]

In this case, the nengo_loihi and nengo simulators are nearly equal in accuracy! The master branch is also close, but not as close (not shown). For other conditions, however, things fall apart or become even worse than on master, and I'm finding it difficult to debug / isolate the possible sources of error. The mean values look right, but there is often some pretty crazy variability. For example, in the simulation below, the only difference from the above is that the input is now a constant 1.

[Figure "wild_variability": decoded output for a constant input of 1]

And curiously, it is only so extreme on this branch. It's much less variable on master (shown below), which is counter-intuitive. Could be something to do with homogeneity in response curves?

[Figure "master": the same constant-input condition on the master branch]

@arvoelke commented

Ping. I think we should still consider this, as the spike generators are a significant source of error, and this reduces that error by a factor of 2^k for a chosen k (see the unit test that verifies this).
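The 2^k factor is just the usual quantization-error scaling: rounding to k bits bounds the error by 2^-k, so each additional bit halves it. A quick numerical check (illustrative only, not the PR's unit test):

```python
import numpy as np

def quantization_error(x, k):
    """Error from rounding x in [0, 1) down to a k-bit grid."""
    step = 2.0 ** -k
    return x - np.floor(x / step) * step

xs = np.linspace(0, 1, 1000, endpoint=False)
# Worst-case error over the grid, for k = 1 .. 8 bits.
errs = [np.max(quantization_error(xs, k)) for k in range(1, 9)]
```

Each entry of `errs` stays below 2^-k and roughly halves with every extra bit, matching the 2^k improvement claimed above.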

@tbekolay tbekolay marked this pull request as draft December 13, 2021 21:23