
Include regression tests for weights=True versus False #224

Open
arvoelke opened this issue Jun 4, 2019 · 1 comment


arvoelke commented Jun 4, 2019

As noted in #74 (comment), it would be useful to include some tests that compare the performance of weights=True versus weights=False, both to detect any regression in the performance of DecodeNeurons relative to full weights and to serve as a basic benchmark for experimenting with variants and future improvements.

This could be made part of a more general research task that involves determining in which situations this makes more or less of a difference.
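
As a starting point, here is a rough sketch of what such a test could look like, using only the public nengo / nengo_loihi APIs (a communication channel with `LstsqL2(weights=...)`, run on the emulator). The network sizes, run length, and the tolerance factor are placeholders, not measured values, and would need to be calibrated before this could act as a real regression test.

```python
import numpy as np
import nengo
import nengo_loihi


def run_channel(weights, seed=0):
    """Build a 1D communication channel and return (time, probed output)."""
    with nengo.Network(seed=seed) as model:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
        a = nengo.Ensemble(100, 1)
        b = nengo.Ensemble(100, 1)
        nengo.Connection(stim, a)
        # weights=True solves for a full weight matrix; weights=False solves
        # for decoders, which nengo_loihi implements with DecodeNeurons.
        nengo.Connection(a, b, solver=nengo.solvers.LstsqL2(weights=weights))
        p = nengo.Probe(b, synapse=0.03)

    with nengo_loihi.Simulator(model, target="sim") as sim:
        sim.run(1.0)
    return sim.trange(), sim.data[p]


def test_decode_neurons_vs_full_weights():
    t, y_full = run_channel(weights=True)
    _, y_decode = run_channel(weights=False)
    ideal = np.sin(2 * np.pi * (t - 0.03))  # roughly account for filter delay

    rmse_full = np.sqrt(np.mean((y_full[:, 0] - ideal) ** 2))
    rmse_decode = np.sqrt(np.mean((y_decode[:, 0] - ideal) ** 2))

    # Guard against regressions: DecodeNeurons should stay within some factor
    # of the full-weight error (the factor of 2 here is only a placeholder).
    assert rmse_decode < 2 * rmse_full
```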


arvoelke commented Jun 13, 2019

Using the trick in #230 (comment), here's a comparison of weights=True versus weights=False on a 6D Legendre Memory Unit (LMU):

weights=True:
[plot: lmu_with_monkeypatch]

weights=False:
[plot: lmu_without_monkeypatch]

With all-to-all weights we get a fairly stable history of the input, but with DecodeNeurons things blow up systematically (I've found some variations that look better, but they are qualitatively similar). This is running on the actual hardware, on the master branch. I was seeing the same thing in my thesis (which also included the improvements in #132).
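
For reference, here is a rough sketch of how a comparison like this could be scripted, assuming a Legendre Delay Network realization of the memory mapped onto a recurrent ensemble with Principle 3. The window length theta, synapse tau, neuron count, and run length below are placeholders, not the exact settings behind the plots above, and this uses the emulator rather than the chip.

```python
import numpy as np
import nengo
import nengo_loihi

q = 6          # order of the Legendre memory (6D state)
theta = 0.5    # length of the delay window (s); placeholder
tau = 0.1      # lowpass synapse on the recurrent connection; placeholder


def lmu_matrices(q, theta):
    """Continuous-time (A, B) for the Legendre Delay Network."""
    Q = np.arange(q)
    R = (2 * Q + 1) / theta
    i, j = np.meshgrid(Q, Q, indexing="ij")
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * R[:, None]
    B = ((-1.0) ** Q * R)[:, None]
    return A, B


def build_and_run(weights, seed=0):
    A, B = lmu_matrices(q, theta)
    # Map the continuous-time system onto the lowpass synapse (Principle 3).
    A_map = tau * A + np.eye(q)
    B_map = tau * B
    solver = nengo.solvers.LstsqL2(weights=weights)

    with nengo.Network(seed=seed) as model:
        u = nengo.Node(nengo.processes.WhiteSignal(period=10, high=2),
                       size_out=1)
        x = nengo.Ensemble(600, q)
        nengo.Connection(u, x, transform=B_map, synapse=tau)
        # The recurrent connection is where weights=True vs. False matters.
        nengo.Connection(x, x, transform=A_map, synapse=tau, solver=solver)
        p = nengo.Probe(x, synapse=0.01)

    with nengo_loihi.Simulator(model, target="sim") as sim:
        sim.run(4.0)
    return sim.trange(), sim.data[p]


# With full weights the memory state should stay bounded; a systematic
# blow-up in the weights=False case would reproduce the behaviour above.
for w in (True, False):
    t, x = build_and_run(weights=w)
    print("weights=%s, max |x| = %.3f" % (w, np.abs(x).max()))
```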
