I am using litgpt to pretrain Llama 3.1 70B on 4 nodes, each with 8 H100 GPUs, but I am still getting a CUDA out-of-memory error. I am using a global batch size of 8 and a micro batch size of 1. Which PyTorch version should I use?
Any suggestions for this case?
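For context, here is my own rough back-of-envelope estimate of the sharded model and optimizer state per GPU (the parameter count and byte sizes are approximations, not measured values):

```python
# Rough per-GPU memory estimate for pretraining a ~70B-parameter model with
# Adam under full parameter/gradient/optimizer-state sharding (FSDP/ZeRO-3)
# across 4 nodes x 8 GPUs. Back-of-envelope only; ignores activations,
# the CUDA context, and allocator fragmentation.

PARAMS = 70e9          # Llama 3.1 70B parameter count (approximate)
WORLD_SIZE = 4 * 8     # 4 nodes x 8 H100s

bytes_weights_bf16 = PARAMS * 2      # bf16 model weights
bytes_grads_bf16   = PARAMS * 2      # bf16 gradients
# Adam with fp32 master weights keeps ~3 extra fp32 copies per parameter
# (master weights + first and second moments)
bytes_optimizer    = PARAMS * 4 * 3

total_state = bytes_weights_bf16 + bytes_grads_bf16 + bytes_optimizer
per_gpu_state_gib = total_state / WORLD_SIZE / 2**30

print(f"Total model/optimizer state: {total_state / 2**30:.0f} GiB")
print(f"Sharded across {WORLD_SIZE} GPUs: {per_gpu_state_gib:.1f} GiB per GPU")
# -> roughly 33 GiB of the 80 GiB on each H100 is used before any activations
#    are allocated, so long sequences or missing activation checkpointing can
#    easily push a rank out of memory.
```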