
Suggestion for cleaner normalization implementation #117

Open
ychong opened this issue Jan 29, 2018 · 1 comment


ychong commented Jan 29, 2018

Hi,

If it's not too much trouble, I have a suggestion for a cleaner implementation of x_vals normalization.

Since the objective of normalization is just to allow for faster convergence, instead of defining a separate function we could simply do:

x_vals_train = x_vals_train / x_vals_train.max(axis=0)
x_vals_test = x_vals_test / x_vals_test.max(axis=0)

This implementation is cleaner and achieves similar loss results. Do let me know what you think.
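For comparison, here is a minimal self-contained sketch of both approaches, assuming the x_vals arrays are NumPy arrays; the helper name normalize_cols and the sample data below are hypothetical, standing in for whatever the repository's separate function does:

```python
import numpy as np

# Hypothetical sample data standing in for x_vals_train / x_vals_test
rng = np.random.default_rng(0)
x_vals_train = rng.uniform(1.0, 100.0, size=(10, 3))
x_vals_test = rng.uniform(1.0, 100.0, size=(4, 3))

def normalize_cols(m):
    # Explicit min-max normalization helper (hypothetical name):
    # maps each column into the range [0, 1].
    col_min = m.min(axis=0)
    col_max = m.max(axis=0)
    return (m - col_min) / (col_max - col_min)

# Suggested one-liner: divide each column by its own maximum.
# Broadcasting divides every row by the per-column max vector.
x_train_scaled = x_vals_train / x_vals_train.max(axis=0)
x_test_scaled = x_vals_test / x_vals_test.max(axis=0)

# Both variants bring all columns onto a comparable scale,
# with every scaled value at most 1.0.
```

One design note: the one-liner scales the test set by the test set's own column maxima, so it uses test-set statistics; the more common convention is to compute the scaling factors on the training data only and apply those same factors to the test data.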

Sincerely,
Yi Xiang
[email protected]

nfmcclure (Owner) commented

I'll look into this. Thanks!

@nfmcclure nfmcclure self-assigned this Mar 21, 2018