Hi, great work!
I tried running the training code on 4090 GPUs, which have 24GB of memory. Even with the batch size set to 1, it exceeded the available memory. Can you confirm whether this is expected? Also, how much memory does the A100 GPU you used have?
Thank you very much!
Thank you very much. We trained our model on A100 GPUs with 40GB of memory, using a batch size of 6 per GPU. When training with a batch size of 1, the memory usage is ~8GB, which should fit on a GPU with 24GB of memory. How did you update the batch size: via the configuration file, or in genie/config.py? Note that batch_size in genie/config.py is overwritten by batchSize from the configuration file (i.e. runs/example/configuration).
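For reference, here is a minimal sketch of how such an override typically works; it is not the repository's actual loader, and the section/key names (`training`, `batchSize`) are illustrative assumptions only:

```python
# Illustrative sketch: a value set in runs/example/configuration takes
# precedence over the default defined in code (e.g. genie/config.py).
import configparser

DEFAULT_BATCH_SIZE = 6  # assumed default, standing in for batch_size in genie/config.py

parser = configparser.ConfigParser()
parser.read("runs/example/configuration")

# If the configuration file defines batchSize, it overrides the default;
# otherwise the fallback value from the code is used.
batch_size = parser.getint("training", "batchSize", fallback=DEFAULT_BATCH_SIZE)
print(f"Effective batch size: {batch_size}")
```

The practical takeaway: lowering batch_size in genie/config.py alone has no effect if batchSize is still set in the run's configuration file, so reduce it there to bring memory usage down.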