Section 3.3.2 of the Llama 3.1 paper (https://arxiv.org/pdf/2407.21783) says that Llama 3.1 was trained with FSDP sharding the parameters, gradients, and optimizer states. However, it also says that the parameters were not re-sharded after the forward pass, to avoid an extra all-gather during the backward pass. Doesn't this mean that each DP rank needs enough memory to hold the entire model's parameters? If so, why bother sharding the parameters for the forward pass if each rank must hold the whole model for the backward pass anyway?
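To make the trade-off concrete, here is a back-of-envelope memory sketch (my own illustrative arithmetic, not from the paper). It assumes bf16 parameters and gradients (2 bytes each) and fp32 master weights plus Adam moments (12 bytes total per parameter), with gradients and optimizer states staying sharded across DP ranks in all cases:

```python
# Rough per-rank memory in GiB under FSDP-style sharding.
# Assumptions (hypothetical, for illustration): bf16 params/grads,
# fp32 master weights + Adam m and v sharded across dp_ranks.
def per_rank_gib(num_params, dp_ranks, reshard_after_forward):
    bytes_params = 2 * num_params   # bf16 parameters
    bytes_grads = 2 * num_params    # bf16 gradients
    bytes_optim = 12 * num_params   # fp32 master weights + Adam m, v
    # Gradients and optimizer states remain sharded either way.
    sharded = (bytes_grads + bytes_optim) / dp_ranks
    # Without resharding, each rank holds the full parameters from
    # forward through backward; with resharding, only its 1/N shard
    # persists (transient per-layer all-gathers ignored here).
    if reshard_after_forward:
        params = bytes_params / dp_ranks
    else:
        params = bytes_params
    return (params + sharded) / 2**30

# An 8B-parameter model across 64 DP ranks:
print(per_rank_gib(8e9, 64, reshard_after_forward=True))   # shard everything
print(per_rank_gib(8e9, 64, reshard_after_forward=False))  # keep full params
print(per_rank_gib(8e9, 1, reshard_after_forward=False))   # no sharding at all
```

The point of the arithmetic: even when full parameters (~16 GB here) stay resident through the backward pass, the gradients and optimizer states (~112 GB unsharded) still shrink by the DP degree, so sharding them remains a large win; skipping the reshard only forgoes the comparatively small parameter-shard saving in exchange for one fewer all-gather.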