Configure defaults for optimal Loihi performance #126
Conversation
This is also related to #83, since how we scale weights will determine how sensitive we are to large weights/gains, which is the reason for limiting intercepts.
After rebasing, the only commit not yet incorporated in master is bd5df97, which I believe is an improvement over the current state of master.
This causes one of the nengo core tests to fail (see https://travis-ci.com/nengo/nengo-loihi/jobs/205047436#L3674).
Hmm, that's strange. That test is making sure that inhibition works correctly to silence neurons. In core Nengo, you get the output you'd expect. The bottom plot shows neuron voltages. Even on master of this repository, some neurons are surprisingly firing, though not enough to break the tolerances and fail the test. This branch seems to make things worse, with more neurons firing when they should be silent. So something is definitely going on here that shouldn't be, but the problem goes beyond this branch.
Figured it out. When we do node->neuron connections, we do the transform on the host (before we turn the value into spikes). This works fine for transforms that keep the output in the range [-1, 1], but fails otherwise. Not sure what the best fix is.
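To illustrate the failure mode, here is a minimal rate-based stand-in for the host's value-to-spike conversion (the function name and mechanics are hypothetical, not the actual nengo-loihi code): the conversion assumes the value lies in [-1, 1], so a transform applied before it saturates.

```python
import numpy as np

def encode_to_spikes(value, max_rate=1000.0, dt=0.001, steps=1000):
    """Hypothetical stand-in for encoding a host value into spikes.

    The spike rate is proportional to the value, which is implicitly
    clipped to [-1, 1] -- values outside that range saturate.
    """
    rng = np.random.default_rng(0)
    rate = np.clip(value, -1.0, 1.0) * max_rate
    p = abs(rate) * dt  # spike probability per timestep
    spikes = rng.random(steps) < p
    # decode the represented value back from the spike count
    return np.sign(rate) * spikes.mean() / (dt * max_rate)

transform = 3.0  # pushes the output outside [-1, 1]

# Transform applied on the host, *before* spike conversion: saturates.
host_side = encode_to_spikes(transform * 0.8)   # pins at ~1.0, not 2.4

# Transform applied after spike conversion (i.e. on chip): correct.
chip_side = transform * encode_to_spikes(0.8)   # ~2.4
```

The saturation is invisible for small transforms, which is why the bug only surfaced with inhibitory (large negative) connections.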
Ok, we're now putting weights on the chip when doing any connection into neurons. I think this has a number of advantages (see the commit message for 32a5873). It does make the mapping behaviour slightly more complicated, since there are now differences depending on both the pre and post objects.
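The mapping rule described above might be sketched as follows (a hypothetical simplification, not the actual nengo-loihi builder logic): connections into neurons always put their weights on the chip, while other connections depend on where the pre object lives.

```python
def transform_location(pre_on_chip: bool, post_is_neurons: bool) -> str:
    """Hypothetical sketch of where a connection's transform is applied.

    Per the change above, any connection into neurons puts its weights
    on the chip; otherwise the location follows the pre object.
    """
    if post_is_neurons:
        return "chip"
    return "chip" if pre_on_chip else "host"
```

This is why the mapping now has to inspect both endpoints of each connection rather than just the pre object.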
LGTM, merging once CI finishes.
Force-pushed from 23d7dd5 to 96f056f
And raise decent errors as soon as possible. Co-authored-by: Trevor Bekolay <[email protected]>
Since intercepts are applied after the encoders, it's only the upper bound that needs to be less than one to avoid high gains. The lower bound should go to -1. This changed the intercepts for test_nengo_comm_channel_compare, causing it to fail. The test was fragile because it did not apply sufficient filtering to the output. The new test filters the outputs more and can thus have tighter tolerances (passing on 10 seeds).
The transform is now applied on-chip when doing a connection from a host object to on-chip neurons. This helps avoid scaling issues (e.g. if the transform has output outside [-1, 1]) and avoids computing large transforms off-chip (transforms into neurons are often large).
In #73, we started talking about whether the current defaults are actually the best. I chose them pretty arbitrarily, so it's unlikely that they are.
This PR is a place for us to collect changes to the defaults that improve performance across a variety of models. With many of the Nengo benchmarks now becoming available, those could be a good way to test potential changes.
To start, I'm correcting something that was a mistake on my part. Since intercepts happen after encoders, we actually just need to lower the upper bound on intercepts (away from one) to avoid high gains. The lower bound can (and I think should) stay at -1. I'd like to test this change on some benchmarks, though, to see if it's actually better.
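To see why only the upper intercept bound matters, consider the rectified-linear case, where Nengo's gain/bias solve has a closed form: a neuron is silent at its intercept and fires at max_rate at x = 1, giving gain = max_rate / (1 - intercept). (LIF gains differ in detail but show the same trend.) The gain blows up as the intercept approaches 1 but stays modest as it goes to -1:

```python
# Closed-form gain for a rectified-linear neuron, solved from
#   rate(intercept) = 0  and  rate(1) = max_rate
# with rate(x) = max(0, gain * x + bias).
def rect_linear_gain(intercept, max_rate=100.0):
    return max_rate / (1.0 - intercept)

for icpt in [-1.0, 0.0, 0.9, 0.99]:
    print(f"intercept {icpt:5.2f} -> gain {rect_linear_gain(icpt):8.1f}")
```

An intercept of -1 only halves the gain relative to 0, while an intercept of 0.99 multiplies it by 100, so clamping the upper bound is what keeps weights small.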
EDIT: I just went and rebased this to the integrator-accuracy branch (#124), since that makes some pretty significant changes to how we choose weights that could have an effect on accuracy measurements.