
Multidimensional learning #22

Closed · wants to merge 1 commit

Conversation

tbekolay (Member):

This is, I believe, the branch that is closest to working for multidimensional learning. The last time it was tested there were other issues preventing it from working, but now it should be possible to test this out through the learn_nd.py file.

I also had an unanswered question about the for loop in simulator.py starting L424:

The logic in this for loop doesn't make sense to me ... why loop if you're only checking the first group in core.groups? Or is the break supposed to be inside the if statement?
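A minimal sketch of the two break placements in question (only the `core.groups` name comes from the discussion; the group values and the `is_learning` predicate are hypothetical stand-ins for the real loop body in simulator.py):

```python
def first_group_only(groups, is_learning):
    # As the loop reportedly reads: the unconditional break means only
    # groups[0] is ever checked, so the loop is pointless.
    found = []
    for group in groups:
        if is_learning(group):
            found.append(group)
        break  # always exits after the first group
    return found

def until_first_match(groups, is_learning):
    # With the break inside the if block, the loop scans groups until
    # the first learning group is found.
    found = []
    for group in groups:
        if is_learning(group):
            found.append(group)
            break  # stop once a learning group is found
    return found

groups = ["a", "b_learn", "c_learn"]
print(first_group_only(groups, lambda g: "learn" in g))   # []
print(until_first_match(groups, lambda g: "learn" in g))  # ['b_learn']
```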

@hunse self-requested a review August 31, 2018 14:56

hunse (Collaborator) commented Aug 31, 2018:

You'd have to ask @celiasmith about that for loop (since according to git blame he wrote it in 3ca8df7), but to me it does look like the break should be in the if block.

I'll try to review this today.

hunse (Collaborator) left a comment:

I've finished my online review. I started trying to rebase this, but I ran into a conflict that I don't know how to resolve. Commit 24ed25f consolidated all error information into one message, and I'm not sure how this will work with multidim errors (I'm not familiar enough with that part of the code). So I'd like to leave this for someone else to rebase. Once it's rebased, I'm happy to help make some of the changes I've suggested.

@@ -460,6 +460,7 @@ def build_connection(model, conn):
if not conn.solver.weights:
needs_interneurons = True
else:
# TODO: Does this code ever run? What case is this for?
hunse (Collaborator):

This is for if pre_obj is Neurons. I'll add a note.

hunse (Collaborator):

This is actually fixed in master.

self.loihi.nengo_io_h2c.write(1, [0])
self.loihi.nengo_io_h2c.write(1, [0])
self.loihi.nengo_io_h2c.write(1, [0])
# TODO: fix this when are the close writes?
hunse (Collaborator):

This phrase does not make sense to me. Could you elaborate?

# TODO: fix this when are the close writes?
# This needs to be the number of dimensions+2
# I've just set it big so we can use lots of dims
self.loihi.nengo_io_h2c.write(25, [0]*25)
hunse (Collaborator):

Could we do this in loihi.wait_for_completion(), and there maybe we'd be able to figure out the number of dimensions?
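As a hedged sketch of that direction, writing dimensions + 2 zeros instead of the hard-coded 25 (here `h2c_write` stands in for `nengo_io_h2c.write`, and `n_dims` for however `wait_for_completion()` would learn the dimensionality; both names are assumptions):

```python
# Sketch only: replace write(25, [0]*25) with a count derived from the
# probe dimensionality, per the quoted TODO saying the count "needs to
# be the number of dimensions+2".

def close_writes(h2c_write, n_dims):
    n = n_dims + 2
    h2c_write(n, [0] * n)
    return n

# Usage with a recording stub in place of nengo_io_h2c.write:
calls = []
n = close_writes(lambda count, data: calls.append((count, data)), n_dims=3)
print(n)      # 5
print(calls)  # [(5, [0, 0, 0, 0, 0])]
```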

probe = receiver.target
conn = self.model.probe_conns[probe]
dec_cx = self.model.objs[conn]['decoded']
# TODO: Assumes 1 chip
hunse (Collaborator):

Can we have an assert for this? Then it will break immediately if that assumption changes.
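A sketch of the suggested guard (the `n_chips` count is an assumed parameter for illustration; the real value would come from the board or model):

```python
# Fail fast if the single-chip assumption is ever violated, instead of
# silently reading probes from the wrong place.

def check_single_chip(n_chips):
    assert n_chips == 1, "this code path assumes exactly 1 chip"

check_single_chip(1)  # passes silently
try:
    check_single_chip(2)
except AssertionError as e:
    print(e)  # this code path assumes exactly 1 chip
```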

@@ -25,15 +25,17 @@ void nengo_io(runState *s) {
}

//Communicate with learning snip
int error[2];
//assumes only 1 core is learning
hunse (Collaborator):

Should this be asserted somewhere? Maybe it is?
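A sketch of where such a check could live on the host side, before the snips are launched (the `learning_cores` list is hypothetical; the quoted snip code is C, so this only shows the shape of the check):

```python
# Validate the "only 1 core is learning" assumption baked into nengo_io
# before starting the simulation, rather than asserting inside the snip.

def check_single_learning_core(learning_cores):
    if len(learning_cores) != 1:
        raise ValueError(
            "nengo_io assumes exactly one learning core, got %d"
            % len(learning_cores))

check_single_learning_core([7])  # OK: exactly one learning core
```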

@@ -3,63 +3,69 @@
#include "nengo_learn.h"

void nengo_learn(runState *s) {
int core = s->userData[0];
int error = (signed char) s->userData[1];
int n_errors = s->userData[0];
hunse (Collaborator):

We use the terms error and n_errors in a lot of places (not new to this PR, I think), and it's not clear to me what they mean. I'm wondering if we could have something more specific. Are these error traces? Could we say n_error_traces or something?

print('saving...')
plt.savefig('pes_learning_snips2.png')
plt.savefig('pes_learning_snips{}d.png'.format(d))
hunse (Collaborator):

Should the end goal be to turn this file into a test? That seems like something that I could work on.

tbekolay (Member, Author):

Closing this in favor of #89.

@tbekolay tbekolay closed this Sep 26, 2018
@tbekolay tbekolay deleted the multiDim-learning-manual branch September 28, 2018 17:27