Clip weights during learning to emulate chip #98

Closed
wants to merge 78 commits from the learn-clipping branch

Conversation

Collaborator

@hunse hunse commented Sep 28, 2018

This helps make the emulator more like the chip, based on differences discovered in #89.

One question is where the number 128 comes from. I thought the weights were 8 bits plus a sign bit, which would give us a range of 256; however, 128 is the value that seems to replicate the chip's behaviour.

Another question is whether we want to clip or overflow. The chip appeared to show much weirder (noisier? rougher?) behaviour as the weights neared the threshold, which could indicate that they are overflowing rather than being cleanly clipped. That would also fit the chip's usual behaviour of overflowing instead of clipping.
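Below is a minimal sketch (not code from this PR) contrasting the two behaviours being discussed, assuming a weight magnitude limit of 128:

```python
import numpy as np

LIMIT = 128  # the magnitude that seems to replicate the chip's behaviour

def clip_weights(w):
    # Saturate: values past the limit stick at +/- LIMIT.
    return np.clip(w, -LIMIT, LIMIT)

def overflow_weights(w):
    # Wrap around: treat values as if they overflow a signed register
    # spanning [-LIMIT, LIMIT).
    return (w + LIMIT) % (2 * LIMIT) - LIMIT

w = np.array([-200, -129, 0, 127, 128, 200])
print(clip_weights(w))      # [-128 -128    0  127  128  128]
print(overflow_weights(w))  # [  56  127    0  127 -128  -56]
```

If the chip overflows, the second behaviour would explain the rough response near the threshold, since a weight that grows just past the limit suddenly flips sign.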

It would be good to have a test that roughly compares the emulator and chip behaviour. This could be the test from #89 with `function=lambda x: 0` added back in on the learned connection, since that is what revealed the problem originally.

Based off #89.

tbekolay and others added 30 commits August 19, 2018 16:30
The overall structure of the documentation is copied from other
Nengo projects. Much of the content is adapted from the docs
currently in the wiki, with slight changes to make it more generic
to any Loihi setup and not necessarily ours. The overview
and API docs are new. Some classes that made sense to include in
the API docs had sparse docstrings, so those were added where
appropriate. Examples were moved to the docs folder for easier
inclusion and fewer folders in the root directory.
The input stream has shape `(n_steps, input_dimensions)`, so previously we were always running for `input_dimensions` (390) timesteps.
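As a rough illustration of the bug described in this commit (variable names are assumed, not taken from the code), the step count should come from the first axis of the input array, not the second:

```python
import numpy as np

inputs = np.zeros((1000, 390))  # (n_steps, input_dimensions)

n_steps = inputs.shape[0]    # correct: run for 1000 timesteps
# n_steps = inputs.shape[1]  # old behaviour: always ran for 390 timesteps
```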
And add a docstring for loihi_api.VthProfile.
For both the emulator and hardware. Note that these are set to
match for the learn_communication_channel notebook, and won't
necessarily match in other places, but since we render this
notebook in the docs it's important that it matches Nengo here.
Mostly copied from nengo_extras, but also with some of the doc
building from nengo_dl.

Also contains fixes for the static check issues raised by
the new scripts.
Previously it was not clear where to get nengo_loihi.
Most people will be interacting with Loihi through the INRC
superhost.
Includes three networks with semi-realistic scenarios.
This reverts part of the "Fix dt scaling on connections" commit,
which was partly done in an effort to fix RectifiedLinear neurons,
since they have different scaling properties from LIF neurons.
Those issues still exist, but rather than hold up the release to
fix them, we disable ReLU neurons for the time being and add back
the weight scaling.

This commit reverts the changes to test_ens_neurons_on_host,
but also modifies the test to use better variable names and
add comments as to what the test is supposed to be doing.
I added a test for ensemble-to-neuron connections, which failed
because the weights were scaled up by 1/dt. The node-to-neuron
connections were scaled properly, though, so what ended up working
was to revert the previous reverting of the dt scaling. However,
doing this caused `test_n2n_on_host` to fail when the pre ensemble
was simulated off-chip and its data was sent into post neurons on
the chip, so to fix that the transform is now scaled by 1/dt when
pre is a `ChipReceiveNeurons` object.
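A hypothetical sketch of the scaling rule this commit describes; the function, its arguments, and the stand-in class are assumptions, not the builder's actual code:

```python
class ChipReceiveNeurons:
    """Stand-in for the nengo_loihi object that feeds off-chip spikes to the chip."""

def scale_transform(transform, pre_obj, dt=0.001):
    # Per the commit message above: when the pre object delivers host data
    # into on-chip neurons, the connection transform is scaled by 1/dt.
    if isinstance(pre_obj, ChipReceiveNeurons):
        return transform / dt
    return transform
```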
nxsdk imports Matplotlib, meaning that we cannot switch to the
'Agg' backend once nxsdk is imported, which occurs between
loading the root's conftest.py and nengo_loihi/conftest.py.
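A minimal sketch of the kind of workaround this implies (not necessarily the exact fix applied here): select the backend before anything, including nxsdk, imports matplotlib.pyplot.

```python
import matplotlib

# Force the non-interactive backend before nxsdk (or anything else) pulls in
# matplotlib.pyplot; once pyplot has been imported, the backend can no longer
# be switched reliably.
matplotlib.use("Agg")
```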
The fix is to use the correct Vth scaling. The previously skipped
tests are now reenabled with the SpikingRectifiedLinear neuron type.

Co-authored-by: Daniel Rasmussen <[email protected]>
This requires Python >= 3.5. We only require Python >= 3.4,
which is now enforced through setup.py.
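A minimal sketch of how a minimum Python version can be enforced through setup.py; the actual `setup()` arguments used in this repository are assumptions:

```python
from setuptools import find_packages, setup

setup(
    name="nengo-loihi",
    packages=find_packages(),
    python_requires=">=3.4",  # refuse to install on older interpreters
)
```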
This used to fail because neuron_type (with amplitude) was not
passed along with spikes.

The keyword_spotting example was fixed to set the amplitude
in the neuron type, and to use a more appropriate amplitude
since the dt scaling was fixed.
This should not make a difference to the existing execution,
since the changed case does not have interneurons, and thus
mid_cx is pre_cx. This change clarifies the logic if interneurons
are added to any of these types of connections down the line.
Also adds documentation for CxGroup and defaults for scaleV
and scaleU.
We now require scaling on neuron input currents, as a long
time constant for which the input is not scaled can lead to
unexpected behaviour.

We also require voltage scaling for LIF neurons.
This helps with weight scaling issues.

Addresses #8.
These rate functions account for Loihi discretization and
allow us to find better decoders for LIFs and ReLUs.
This has not yet been properly tested.
tbekolay and others added 4 commits October 11, 2018 21:58
The main change is the introduction of the SplitNetworks class,
which keeps track of what's going on during the splitting process.
This makes the placing and splitting functions easier to debug
and test, as evidenced by the new tests added to test_splitter.py.

Most of the behavior from the previous splitter is unchanged
in this commit, though some subtle bugs may have been fixed through
the refactoring process. One deliberate improvement is to
explicitly place an ensemble acting as the `pre` of a learned
connection off-chip, unless this is overridden by the user (though
overriding it will end up resulting in an exception).
No longer transforms neuron connections into decoded connections.

Raises an explicit NotImplementedError for learning rules on
non-decoded connections.
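A rough sketch of the check this describes; the helper name and the way a "decoded" connection is detected are assumptions:

```python
import nengo

def check_learning_rule(conn):
    # Learning rules are only supported on decoded connections; a neuron
    # (non-decoded) pre object with a learning rule is rejected explicitly.
    is_decoded = not isinstance(conn.pre_obj, nengo.ensemble.Neurons)
    if conn.learning_rule_type is not None and not is_decoded:
        raise NotImplementedError(
            "Learning rules are not supported on non-decoded connections")
```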
When multiple tau_s values are requested, we use the longest one.
An explicitly requested value overrides the default: if the user
requests tau_s=0, we use it even though it is shorter than the
default 0.005. If the user requests 0.1 and then 0.05, we use 0.1
because it is the longest requested value.

Co-authored-by: Eric Hunsberger <[email protected]>
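A minimal sketch of the tau_s policy described in the commit above; the names are assumptions, not the actual nengo_loihi implementation:

```python
DEFAULT_TAU_S = 0.005

def resolve_tau_s(requested):
    """Return the synaptic time constant to use for a group of connections."""
    if not requested:
        return DEFAULT_TAU_S
    # Any explicit request overrides the default; among requests, the
    # longest time constant wins.
    return max(requested)

assert resolve_tau_s([]) == 0.005
assert resolve_tau_s([0.0]) == 0.0        # shorter than the default, but explicit
assert resolve_tau_s([0.1, 0.05]) == 0.1  # longest requested value wins
```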
@hunse hunse force-pushed the learn-clipping branch 4 times, most recently from fdef62c to c526034 on October 23, 2018 19:29
This is necessary to ensure that we switch directories and the
plots end up in the proper place.
Includes a test that compares emulator with Loihi to make sure
it is the same.
This allows 'simreal' to still work; we just don't model overflow.
Since the current one is broken.
Before, we were checking that the number of learning axons was greater
than the first learning index, which doesn't make sense.
- Added a comparison to Nengo to the standard PES learning test.
- Also simplified the PES learning test so that it no longer
  requires fitting; it just makes sure that the final learning
  output is near the target.
- This replaces the comparative test.
Collaborator Author

hunse commented Nov 12, 2018

Closed in favour of #139

@hunse hunse closed this Nov 12, 2018
@hunse hunse deleted the learn-clipping branch November 12, 2018 20:26