Discretize to use as much of the dynamic range as possible #83

Open
tbekolay opened this issue Sep 24, 2018 · 1 comment
Comments

@tbekolay (Member)

Right now our discretization process uses the maxima and minima of the actual values of several quantities, including intercepts, which led to issues such as those fixed in #69. We should look into other methods for setting the ranges for these discretization processes: the numerical resolution is not uniform across the number line, and even if it were, we should not be this sensitive to outliers.
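To illustrate the outlier sensitivity, here is a minimal sketch (not our actual discretization code) comparing a range taken from the raw min/max against a range taken from percentiles. The `discretize` helper and the 1st/99th percentile choice are hypothetical.

```python
import numpy as np

def discretize(values, v_min, v_max, bits=8):
    """Map floats with magnitude <= max(|v_min|, |v_max|) to signed ints."""
    scale = (2 ** (bits - 1) - 1) / max(abs(v_min), abs(v_max))
    return np.round(values * scale).astype(np.int64)

rng = np.random.default_rng(0)
values = rng.normal(size=1000)
values[0] = 100.0  # a single outlier

# Range from the actual min/max: the outlier dominates the scale, so
# typical values get squeezed into a handful of integer levels.
naive = discretize(values, values.min(), values.max())

# Range from the 1st/99th percentiles; values outside it are clipped.
lo, hi = np.percentile(values, [1, 99])
robust = discretize(np.clip(values, lo, hi), lo, hi)

print(len(np.unique(naive)), len(np.unique(robust)))
```

The percentile-based range uses far more of the int8 levels for the bulk of the distribution, at the cost of clipping the outlier.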

@tbekolay (Member, Author)

As we determined in #89, discretization ranges are especially important for networks with online learning (PES rule). If the initial function / weights are in the wrong range, then error signals can easily push weights to under/overflow.
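The failure mode can be sketched as follows (hypothetical numbers, not our actual pipeline): if the int8 range is set from the initial weight values, every weight starts at the edge of the representable range, and PES-style updates in the direction of the error saturate immediately.

```python
import numpy as np

# Hypothetical sketch: weights discretized to int8 with a range chosen
# from their *initial* values. Online learning then tries to grow the
# weights, but every update saturates at the top of the fixed range.
bits = 8
w = np.full(5, 0.1)                    # initial weights
w_max = np.abs(w).max()                # range taken from initial values
scale = (2 ** (bits - 1) - 1) / w_max  # int8 scale: 127 represents w_max
w_fixed = np.round(w * scale).astype(np.int64)

learning_rate = 0.05
for _ in range(100):
    error = 1.0                        # constant error signal
    dw = int(round(learning_rate * error * scale))
    w_fixed = np.clip(w_fixed + dw, -127, 127)  # saturates on step one

# The weights never exceed w_max == 0.1, even though the unquantized
# updates alone would have moved them far beyond it.
print(w_fixed / scale)
```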

This case might be similar enough to other cases that it can be handled in similar ways. But it might also be the case that we have to do something different here.

A solution that comes to mind is to determine the min/max for discretization based only on factors like the number of neurons and their properties in the pre ensemble, rather than on the connection function, even if that means we do a poor job of computing the initial connection weights.
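Something along these lines, where both the `weight_bound` heuristic and all the numbers are assumptions for illustration, not an actual nengo-loihi formula:

```python
import numpy as np

# Hypothetical sketch: fix the discretization range from the pre
# ensemble's properties alone (neuron count, max firing rate), ignoring
# the connection function entirely.
def weight_bound(n_neurons, max_rate, radius=1.0):
    # Assumed worst-case per-weight bound, chosen for illustration only.
    return radius * max_rate / n_neurons

bound = weight_bound(n_neurons=100, max_rate=200.0)
scale = 127 / bound  # int8 scale fixed before any weights are solved for

# Initial decoders that fall outside the bound are clipped, possibly
# degrading the initial connection, but learned updates have headroom
# to move weights within a range that does not depend on the function.
decoders = np.random.default_rng(0).normal(scale=0.5, size=100)
quantized = np.clip(np.round(decoders * scale), -127, 127).astype(np.int64)
```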

Labels: Development