At run-time, selectively ignore "small" operations #36

Open
seldridge opened this issue Jul 11, 2016 · 0 comments
This relates to some of my early thoughts on the subject, specifically around gradient descent (if a derivative is going to evaluate to zero, skip it), but both feedforward and learning transactions could potentially benefit from selectively skipping operations. According to Brandon Reagen and the Minerva work at ISCA 2016, power could be improved by at most 2x, though that figure includes both static pruning and run-time pruning, and their respective contributions are unclear. This would be an interesting avenue and is generally low cost to implement.

In broad strokes without dramatic modifications to DANA:

  • Skip any input value that's significantly below the currently accumulated neuron value
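
A rough behavioral sketch of what that check might look like (plain Scala as a software model, not DANA's actual Chisel datapath; the `skipThreshold` knob and all names here are illustrative assumptions, not anything the project defines):

```scala
object SmallOpSkip {
  // Illustrative relative threshold: a contribution this many times smaller
  // than the running accumulator is treated as negligible (assumed knob).
  val skipThreshold: Double = 1e-3

  /** Accumulate weighted inputs into a neuron, skipping "small" operations.
    * A weight * input product is skipped when its magnitude is significantly
    * below the currently accumulated neuron value, per the bullet above.
    */
  def accumulate(inputs: Seq[Double], weights: Seq[Double]): Double =
    inputs.zip(weights).foldLeft(0.0) { case (acc, (x, w)) =>
      val contribution = x * w
      if (math.abs(contribution) < skipThreshold * math.abs(acc)) acc // skip the MAC
      else acc + contribution
    }

  /** Learning-side analogue: skip a weight update when the derivative would
    * evaluate to zero, since the multiply-accumulate would be wasted work.
    */
  def updateWeight(w: Double, learningRate: Double, derivative: Double): Double =
    if (derivative == 0.0) w // nothing to do, skip the update
    else w - learningRate * derivative
}
```

In hardware the same comparison would presumably gate the MAC rather than branch, but the core of the idea is the run-time check against the running accumulator (or against the derivative, for learning transactions).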