The first snippet in the documentation of TorchMetrics in PyTorch Lightning is the following:
class MyModel(LightningModule):
    def __init__(self, num_classes):
        ...
        self.accuracy = torchmetrics.classification.Accuracy(task="multiclass", num_classes=num_classes)

    def training_step(self, batch, batch_idx):
        x, y = batch
        preds = self(x)
        ...
        # log step + epoch metric
        self.accuracy(preds, y)
        self.log('train_acc', self.accuracy, on_epoch=True)
        # Automatically logs at the end of each epoch.
        # Two keys, 'train_acc_step' and 'train_acc_epoch'.
        ...

    # def on_train_epoch_end(self):
    #     # log epoch metric
    #     self.log('train_acc_epoch', self.accuracy)
However, on the Common Pitfalls page it is stated that:

Mixing the two logging methods by calling self.log("val", self.metric) in {training|validation|test}_step method and then calling self.log("val", self.metric.compute()) in the corresponding on_{train|validation|test}_epoch_end method. Because the object is logged in the first case, Lightning will reset the metric before calling the second line, leading to errors or nonsense results.
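For concreteness, this is the mixed pattern the pitfall describes (a minimal sketch; the metric attribute and key names are illustrative):

    def validation_step(self, batch, batch_idx):
        x, y = batch
        preds = self(x)
        self.metric(preds, y)
        # Logging the metric object hands compute/reset over to Lightning.
        self.log("val", self.metric)

    def on_validation_epoch_end(self):
        # Lightning has already reset self.metric by this point,
        # so compute() here yields errors or nonsense results.
        self.log("val", self.metric.compute())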
Therefore, isn't the documentation snippet above a "bad" practice, since the metric is reset before we call the second line, i.e. before calling:

self.log('train_acc_epoch', self.accuracy)
Moreover, since the snippet shows how to use a metric within a LightningModule, wouldn't it be better to adhere to Lightning's automatic logging?
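Alternatively, if an explicit epoch-end hook is preferred, the pitfall can be avoided by never logging the metric object in the step and handling compute/reset manually; a minimal sketch under that assumption:

    def training_step(self, batch, batch_idx):
        x, y = batch
        preds = self(x)
        ...
        # Only update the metric state here; since the object itself is
        # never logged, Lightning does not take over its reset.
        self.accuracy.update(preds, y)

    def on_train_epoch_end(self):
        # Compute once per epoch, then reset manually.
        self.log('train_acc_epoch', self.accuracy.compute())
        self.accuracy.reset()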