MemoryError on doomrnn #19
I am encountering what appears to be an out-of-memory error:

When I load 500 episodes, the program runs fine: the VAE gets trained and the loss decreases.
When I load 2,000 episodes, I get a MemoryError.

The repo uses 10k episodes, but I cannot load even 2k on my 16GB machine. Am I missing something? If memory really is the issue here, how much RAM is necessary to replicate the paper with the code here?

Hi @zuoanqh, thanks for the issue. It's due more to laziness on my part than to actual requirements for training the VAE. When I was running the experiments I was doing them on virtual cloud instances that had GPUs, 64-core CPUs, and a few hundred GBs of RAM, so I was lazy and just loaded the entire dataset into a numpy array, as you have outlined. If you want to train the VAE with very little RAM, feel free to refactor the code using the more modern tf.data Dataset API. Here are a few tutorials on how to use it:

https://towardsdatascience.com/how-to-use-dataset-in-tensorflow-c758ef9e4428
https://www.tensorflow.org/guide/datasets

Best.
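For scale: assuming 64×64×3 uint8 frames and roughly 1,000 frames per episode, 10k episodes come to about 10,000 × 1,000 × 12,288 bytes ≈ 114 GiB as raw uint8 (more once converted to float), which is consistent with needing "a few hundred GBs of RAM" to hold everything at once. A streaming pipeline with tf.data avoids that. The sketch below is illustrative, not the repo's actual code: the file layout (a `record/` directory of `.npz` files, each with an `obs` array of uint8 frames shaped `(T, 64, 64, 3)`) and the shuffle/batch sizes are assumptions.

```python
# Sketch of a streaming input pipeline using tf.data, so only one episode
# file is resident in memory at a time instead of the whole dataset.
# Assumptions (not from the repo): episodes are saved under record/ as
# .npz files, each containing an 'obs' array of uint8 frames (T, 64, 64, 3).
import glob

import numpy as np
import tensorflow as tf


def episode_frames(paths):
    """Yield individual frames, loading one episode file at a time."""
    for path in paths:
        with np.load(path) as episode:
            for frame in episode["obs"]:
                # Normalize uint8 pixels to [0, 1] floats for the VAE.
                yield frame.astype(np.float32) / 255.0


paths = sorted(glob.glob("record/*.npz"))
dataset = (
    tf.data.Dataset.from_generator(
        lambda: episode_frames(paths),
        output_types=tf.float32,
        output_shapes=(64, 64, 3),
    )
    .shuffle(10000)  # shuffle within a bounded buffer, not the full dataset
    .batch(100)
    .prefetch(1)     # overlap loading with training
)
```

Because `from_generator` pulls frames lazily, peak memory is bounded by one episode plus the shuffle buffer, regardless of how many episodes are on disk.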