Merge release 2.0.2 #573
ashleve committed May 2, 2023
2 parents 8919406 + 334271d commit 8886a13
Showing 2 changed files with 2 additions and 6 deletions.
configs/trainer/ddp.yaml (1 addition, 5 deletions)
@@ -1,11 +1,7 @@
 defaults:
   - default.yaml

-# use "ddp_spawn" instead of "ddp",
-# it's slower but normal "ddp" currently doesn't work ideally with hydra
-# https://github.com/facebookresearch/hydra/issues/2070
-# https://pytorch-lightning.readthedocs.io/en/latest/accelerators/gpu_intermediate.html#distributed-data-parallel-spawn
-strategy: ddp_spawn
+strategy: ddp

 accelerator: gpu
 devices: 4
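The config change above switches the trainer strategy from "ddp_spawn" to plain "ddp" and drops the workaround comments. How this file takes effect can be sketched with a minimal stand-in, assuming plain dicts represent the YAML files and a hypothetical compose() mimics Hydra's defaults-list merge (the real merge is done by Hydra/OmegaConf, not this function):

```python
# Hypothetical contents of configs/trainer/default.yaml (illustration only).
base = {
    "accelerator": "cpu",
    "devices": 1,
}

# configs/trainer/ddp.yaml after this commit.
ddp_overrides = {
    "strategy": "ddp",  # was "ddp_spawn" before the fix
    "accelerator": "gpu",
    "devices": 4,
}

def compose(defaults: dict, overrides: dict) -> dict:
    """Sketch of Hydra's defaults-list merge: later entries override earlier ones."""
    merged = dict(defaults)
    merged.update(overrides)
    return merged

trainer_cfg = compose(base, ddp_overrides)
print(trainer_cfg["strategy"])  # -> ddp
```

In the real project, the composed config is then passed to `lightning.Trainer`, so the strategy string ultimately selects the distributed backend.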
src/models/mnist_module.py (1 addition, 1 deletion)
@@ -97,7 +97,7 @@ def on_validation_epoch_end(self):
         self.val_acc_best(acc)  # update best so far val acc
         # log `val_acc_best` as a value through `.compute()` method, instead of as a metric object
         # otherwise metric would be reset by lightning after each epoch
-        self.log("val/acc_best", self.val_acc_best.compute(), prog_bar=True)
+        self.log("val/acc_best", self.val_acc_best.compute(), sync_dist=True, prog_bar=True)

     def test_step(self, batch: Any, batch_idx: int):
         loss, preds, targets = self.model_step(batch)
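This second change adds `sync_dist=True` to the `self.log` call, which asks Lightning to reduce the logged value across all DDP processes, consistent with the switch to the "ddp" strategy in the other file. The "best so far" bookkeeping that the surrounding comments describe can be sketched with a toy stand-in for `torchmetrics.MaxMetric` (a hypothetical simplification; the real class lives in torchmetrics and operates on tensors):

```python
class MaxMetric:
    """Toy stand-in for torchmetrics.MaxMetric: tracks a running maximum."""

    def __init__(self):
        self.best = float("-inf")

    def __call__(self, value: float) -> None:
        # update step: keep the running maximum
        self.best = max(self.best, value)

    def compute(self) -> float:
        # read out the current best as a plain value
        return self.best

    def reset(self) -> None:
        # Lightning resets metric *objects* after each epoch; logging the
        # plain value returned by compute() sidesteps losing the best-so-far.
        self.best = float("-inf")

val_acc_best = MaxMetric()
for epoch_acc in (0.81, 0.93, 0.88):
    val_acc_best(epoch_acc)

print(val_acc_best.compute())  # -> 0.93
```

This is why the module logs `self.val_acc_best.compute()` (a value) rather than the metric object itself: the value survives Lightning's per-epoch metric reset.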
