
Early stopping #55

Open
gngdb opened this issue Mar 10, 2015 · 2 comments

gngdb commented Mar 10, 2015

We need a model that will quickly find a local optimum, so we can change hyper-parameters and figure out what's holding our model back. I suspect a good way to do this is to use the same architecture we've been working with already (with the extra convolutional layer), turning down dropout in the final layers and reducing the rate of decay of the learning rate.
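
As a rough sketch of what these two knobs do (the function and settings below are illustrative, not the project's actual config), an exponential learning-rate schedule makes the trade-off concrete: a decay factor closer to 1.0 keeps the learning rate higher for longer, while a smaller factor lets training settle quickly.

```python
# Hypothetical sketch: how an exponential learning-rate decay schedule
# behaves under different decay factors. Names and values are
# illustrative only, not taken from the experiment's YAML.

def decayed_lr(initial_lr, decay, epoch):
    """Exponential schedule: lr at a given epoch = initial_lr * decay**epoch."""
    return initial_lr * decay ** epoch

# A decay factor near 1.0 (slow decay) keeps the learning rate high for
# longer; a smaller factor (fast decay) shrinks it quickly, which helps
# training settle into a local optimum sooner.
slow_decay = decayed_lr(0.1, 0.99, 50)  # still a sizeable fraction of 0.1
fast_decay = decayed_lr(0.1, 0.90, 50)  # close to zero by epoch 50
print(slow_decay, fast_decay)
```

Lower dropout in the final layers works in the same spirit: less regularisation means the model fits (and bottoms out) faster, at the cost of more overfitting.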

@gngdb gngdb added this to the GPU Training deadline milestone Mar 10, 2015
@gngdb gngdb added the ready label Mar 10, 2015
@gngdb gngdb self-assigned this Mar 10, 2015

gngdb commented Mar 10, 2015

I'd also like to incorporate a Pylearn2 extension to detect a lack of improvement and stop training automatically. We can probably use the MatchChannel termination criterion to do this.
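
The idea behind a monitor-based termination criterion can be sketched in a few lines: stop when a monitored channel (e.g. validation error) hasn't improved by some proportion for N consecutive checks. The class name and parameters below are illustrative, not Pylearn2's actual API.

```python
# Sketch of monitor-based early stopping, in the spirit of Pylearn2's
# termination criteria. Hypothetical class, not the library's interface.

class NoImprovementStopper:
    def __init__(self, prop_decrease=0.01, patience=3):
        self.prop_decrease = prop_decrease  # required relative improvement
        self.patience = patience            # checks allowed without improvement
        self.best = float("inf")
        self.bad_checks = 0

    def continue_learning(self, monitored_value):
        # Only count it as an improvement if it beats the best value
        # seen so far by at least prop_decrease (relative).
        if monitored_value < self.best * (1 - self.prop_decrease):
            self.best = monitored_value
            self.bad_checks = 0
        else:
            self.bad_checks += 1
        return self.bad_checks < self.patience

stopper = NoImprovementStopper(prop_decrease=0.01, patience=3)
errors = [0.9, 0.7, 0.6, 0.6, 0.6, 0.6]
decisions = [stopper.continue_learning(e) for e in errors]
print(decisions)  # last entry is False: three checks without improvement
```

In Pylearn2 this would plug into the training algorithm's `termination_criterion` slot rather than being hand-rolled like this.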


gngdb commented Mar 10, 2015

Starting from Matt's quicker_learning_experiment.yaml, which already fulfils quite a few of the requirements.
