Configure defaults for optimal Loihi performance #126

Merged (3 commits, Jun 11, 2019)

Conversation

@hunse (Collaborator) commented Oct 15, 2018

In #73, we started talking about whether the current defaults are actually the best. I chose them pretty arbitrarily, so it's unlikely that they are.

This PR is a place for us to collect changes to the defaults that improve performance across a variety of models. With many of the Nengo benchmarks now becoming available, those could be a good way to test potential changes.

To start, I'm correcting something that was a mistake on my part. Since intercepts are applied after the encoders, we only need to lower the upper bound on intercepts (away from one) to avoid high gains; the lower bound can (and I think should) stay at -1. I'd like to test this change on some benchmarks, though, to see whether it's actually better.
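
For concreteness, a minimal sketch of what such a default looks like when set at the model level (the 0.5 upper bound is illustrative, not a value this PR commits to):

```python
import nengo
import nengo.dists

# Sketch only: keep the lower intercept bound at -1, but pull the upper bound
# away from 1 so the gains needed to reach the target max rates stay bounded.
# The 0.5 value is illustrative, not the PR's final choice.
with nengo.Network() as net:
    net.config[nengo.Ensemble].intercepts = nengo.dists.Uniform(-1.0, 0.5)
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)
```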

EDIT: I just went and rebased this to the integrator-accuracy branch (#124), since that makes some pretty significant changes to how we choose weights that could have an effect on accuracy measurements.

@hunse (Collaborator, Author) commented Oct 15, 2018

This is also related to #83, since how we scale weights will determine how sensitive we are to large weights/gains, which is the reason for limiting intercepts.

@tbekolay (Member) commented

After rebasing, the only commit not yet incorporated in master is bd5df97, which I believe is an improvement over the current state of master.

@drasmuss (Member) commented Jun 3, 2019

This causes one of the nengo core tests to fail (see https://travis-ci.com/nengo/nengo-loihi/jobs/205047436#L3674).

@hunse (Collaborator, Author) commented Jun 3, 2019

Hmm, that's strange. That test is making sure that inhibition works correctly to silence neurons.

In core Nengo, you get the output you'd expect; the bottom plot shows neuron voltages.
test_connection.test_node_to_neurons.pdf

Even on master of this repository, some neurons are surprisingly firing, though not enough to break the tolerances and fail the test.
test_connection.test_node_to_neurons.pdf

This branch seems to make things worse, with more neurons firing when they should be silent.
test_connection.test_node_to_neurons.pdf

So something is definitely going on here that shouldn't be, but the problem seems to go beyond this branch.

@hunse (Collaborator, Author) commented Jun 3, 2019

Figured it out. When we do node->neuron connections, we do the transform on the host (before we turn the value into spikes). This works fine for transforms that keep the output in the range [-1, 1], but for ones like in this test (the input is 1 and the transform is -5), it doesn't work so well. When neurons have larger biases (as in this PR), there's more bias that the negative input has to overcome, so it does worse.

Not sure what the best fix is.
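
For reference, a rough sketch of the connection pattern being described, loosely modeled on the failing test (names and values are illustrative):

```python
import numpy as np
import nengo

# Sketch of the problematic pattern (loosely modeled on
# test_connection.test_node_to_neurons; values are illustrative).
with nengo.Network() as model:
    ens = nengo.Ensemble(n_neurons=30, dimensions=1)
    stim = nengo.Node(output=1.0)
    nengo.Connection(stim, ens)  # ordinary decoded input keeps the neurons active

    # Inhibitory node -> neurons connection with a transform well outside [-1, 1].
    # If the -5 transform is applied on the host, before the value is turned into
    # spikes, the signal sent to the chip is outside the range spikes can faithfully
    # represent, so the inhibition may no longer overcome the (now larger) biases.
    inhibit = nengo.Node(output=lambda t: 1.0 if t > 0.5 else 0.0)
    nengo.Connection(inhibit, ens.neurons,
                     transform=-5 * np.ones((ens.n_neurons, 1)))
```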

@hunse (Collaborator, Author) commented Jun 5, 2019

Ok, we're now putting weights on the chip when doing any connection into neurons. I think this has a number of advantages (see the commit message for 32a5873). It does make the mapping behaviour slightly more complicated, since there are now differences depending on both the pre and post objects.
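
As a rough illustration of the mapping point (a sketch with illustrative names, not the builder's actual code path), where the weights end up now depends on both ends of the connection:

```python
import numpy as np
import nengo

# Sketch only: which device realizes the weights depends on both pre and post.
with nengo.Network() as net:
    host_node = nengo.Node(output=0.5)   # runs on the host
    a = nengo.Ensemble(50, 1)            # runs on the chip
    b = nengo.Ensemble(50, 1)            # runs on the chip

    # Host object -> chip neurons: with this change, the transform is realized
    # as synaptic weights on the chip rather than applied on the host.
    nengo.Connection(host_node, a.neurons,
                     transform=np.ones((a.n_neurons, 1)))

    # Chip -> chip: the weights were already on the chip; unchanged by this PR.
    nengo.Connection(a, b)
```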

@tbekolay (Member) left a comment


LGTM, merging once CI finishes.

@tbekolay force-pushed the defaults branch 4 times, most recently from 23d7dd5 to 96f056f on June 10, 2019 22:42

drasmuss and others added 3 commits on June 11, 2019 13:34
And raise decent errors as soon as possible.

Co-authored-by: Trevor Bekolay <[email protected]>
Since intercepts are applied after the encoders, it's only the upper
bound that needs to be less than one to avoid high gains. The lower
bound should go to -1.

This changed the intercepts for test_nengo_comm_channel_compare,
causing it to fail. The test was fragile because it did not apply
sufficient filtering to the output. The new test filters the outputs
more and can thus have tighter tolerances (passing on 10 seeds).
The transform is now applied on-chip when doing a connection from
a host object to on-chip neurons. This helps avoid scaling issues
(e.g. if the transform has output outside [-1, 1]), and avoids
computing large transforms off-chip (transforms into neurons are
often large, since they have one row per target neuron).