Trying to reproduce the results from paper #3
Comments
I suspect you are using different versions of the libraries from the ones we expect.
I've tried the exact versions of the libraries. With these versions a minor code change is required, because the Chainer API is different. In any case, multiple workers do not work on Windows, so I'm now trying to reproduce the results on Linux. I'll update this issue later today.
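For context, here is a generic illustration (not this repository's code) of why parallel workers often break on Windows: Windows has no fork(), so Python's multiprocessing starts workers with the "spawn" method, which re-imports the main module in every worker, requires the launching code to sit behind an `if __name__ == "__main__":` guard, and only accepts picklable arguments. Asynchronous training loops written with Linux's fork semantics in mind frequently fail under these constraints.

```python
# Generic sketch, not the repository's code: the pattern required for
# multiprocessing on Windows, where workers are started via "spawn".
import multiprocessing as mp

def actor(rank: int) -> None:
    # Stand-in for one asynchronous training worker.
    print(f"actor {rank} running")

if __name__ == "__main__":
    # Without this guard, spawn-based start-up re-executes the module's
    # top level in each worker and raises a RuntimeError on Windows.
    processes = [mp.Process(target=actor, args=(i,)) for i in range(4)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
```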
I've tried on Linux. After 100k steps the reward is still below zero, around -5. Should it go above zero? How many steps do you use during training?
The reward should become larger than it is at the start of training, but it does not need to become larger than zero. I trained the model for 96000 steps, which is the default value. To reproduce the paper's results, please train the model with python train.py settings/photo_enhancement.yaml logs and then test it with python test.py settings/photo_enhancement.yaml logs --result_dir logs/20200115T223451.986831/96000_finish/test_results --load logs/20200115T223451.986831/96000_finish/ where logs/20200115T223451.986831/96000_finish/ is the output directory produced by the training run (the timestamped directory name will differ for your run).
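Because the timestamped directory name changes with every run, the snippet below is a small hypothetical helper (not part of the repository) that locates the most recent finished run under ./logs and prints the corresponding test command, assuming the logs/&lt;timestamp&gt;/96000_finish/ layout shown in the commands above.

```python
# Hypothetical convenience script, assuming the logs/<timestamp>/96000_finish/
# layout from the commands above; adjust the names if your run differs.
from pathlib import Path

def latest_finished_run(log_root: str = "logs", snapshot: str = "96000_finish") -> Path:
    """Return the newest logs/<timestamp>/<snapshot> directory."""
    runs = sorted(p for p in Path(log_root).iterdir() if (p / snapshot).is_dir())
    if not runs:
        raise FileNotFoundError(f"no {snapshot!r} directory found under {log_root!r}")
    return runs[-1] / snapshot

if __name__ == "__main__":
    snap = latest_finished_run()
    print(
        "python test.py settings/photo_enhancement.yaml logs "
        f"--result_dir {snap / 'test_results'} --load {snap}"
    )
```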
Hi,
I'm trying to reproduce the results from the paper, and the reward never goes above zero.
Could you please let me know whether it is possible to reproduce the results with one worker? I'm training on Windows, and Chainer does not work with more than one worker there.
Many thanks
Eugene