I run training with `python -m torch.distributed.launch --nproc_per_node=4`, using dataset=cityscape, backbone=resnet50, and batchsize=4.
My GPUs are 4x Nvidia Titan X with 12 GB of memory per card, and I got a CUDA out-of-memory error. I have no idea why.
I then set batchsize to 2; this time there was no OOM, but this error appeared:
if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). Parameter indices which did not receive grad for rank 0: 165 166
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
Could you help me figure out why?
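In case it's relevant, my reading of the suggested fix is to pass `find_unused_parameters=True` when wrapping the model in DDP. Here is a minimal sketch of that wrap, assuming the usual `torch.distributed.launch` setup (the `model`, `local_rank`, and layer names below are placeholders, not this repo's actual training code):

```python
import os

import torch
import torch.nn as nn

# Placeholder setup: torch.distributed.launch / torchrun provide LOCAL_RANK
# (with --use_env on older launch versions) plus the rendezvous env vars.
local_rank = int(os.environ.get("LOCAL_RANK", 0))
torch.distributed.init_process_group(backend="nccl")
torch.cuda.set_device(local_rank)

model = nn.Linear(128, 19).cuda(local_rank)  # stand-in for the real network
model = nn.parallel.DistributedDataParallel(
    model,
    device_ids=[local_rank],
    # Tell DDP to detect parameters that took no part in producing the loss
    # (e.g. an auxiliary head that is skipped on some iterations), so their
    # missing gradients don't stall bucket reduction. This costs an extra
    # traversal of the autograd graph every iteration.
    find_unused_parameters=True,
)
```

Running with `TORCH_DISTRIBUTED_DEBUG=DETAIL` set in the environment should also print which named parameters correspond to indices 165 and 166 on rank 0, which would show whether they really are meant to be excluded from the loss.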