Calculate Metrics in callbacks #2

Open
sbatchelder opened this issue Dec 30, 2024 · 0 comments

@sbatchelder (Collaborator)

There's duplicate code in different LightningModules that calculates metrics.
To avoid this duplication, the metric setup, update, logging, and reset can be moved into a Lightning callback, which can then be shared by the different modules.

The callback should have an init where you specify which metrics to track (F1, recall, precision, accuracy, ROC-AUC, PR-AUC) crossed with the averaging mode (micro, macro, weighted, per-class).
Macro averaging should be computed over the full validation dataset, not per minibatch as it currently is.
By tracking validation_step outputs, instead of relying on on_validation_epoch_end results for preds, we can address #1 as well.
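
A rough sketch of what this callback could look like, assuming torchmetrics is available and that validation_step returns a dict with "preds" (logits/probabilities of shape (N, C)) and "target" keys. The class name, constructor arguments, and that dict keying are illustrative conventions here, not anything Lightning enforces:

```python
import torch
import pytorch_lightning as pl
from torchmetrics import MetricCollection
from torchmetrics.classification import (
    MulticlassAccuracy,
    MulticlassAUROC,
    MulticlassAveragePrecision,
    MulticlassF1Score,
)


class MetricsCallback(pl.Callback):
    """Accumulates validation_step outputs and computes metrics once per epoch."""

    def __init__(self, num_classes: int, averages=("micro", "macro", "weighted")):
        super().__init__()
        metrics = {}
        for avg in averages:
            metrics[f"f1_{avg}"] = MulticlassF1Score(num_classes, average=avg)
            metrics[f"acc_{avg}"] = MulticlassAccuracy(num_classes, average=avg)
        # AUROC / average precision don't support micro averaging for multiclass,
        # so only the macro variants are added here.
        metrics["auroc_macro"] = MulticlassAUROC(num_classes, average="macro")
        metrics["prauc_macro"] = MulticlassAveragePrecision(num_classes, average="macro")
        self.metrics = MetricCollection(metrics, prefix="val/")
        self._preds, self._targets = [], []

    def on_validation_batch_end(self, trainer, pl_module, outputs, batch,
                                batch_idx, dataloader_idx=0):
        # Assumes validation_step returned {"preds": ..., "target": ...}.
        self._preds.append(outputs["preds"].detach().cpu())
        self._targets.append(outputs["target"].detach().cpu())

    def on_validation_epoch_end(self, trainer, pl_module):
        # Compute on the full validation set so macro averaging is not minibatched.
        preds = torch.cat(self._preds)
        targets = torch.cat(self._targets)
        pl_module.log_dict(self.metrics(preds, targets))
        self.metrics.reset()
        self._preds.clear()
        self._targets.clear()
```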

One wrinkle is how we manage the plot callbacks. Many of them rely on average=None (per-class) metrics. Do they become helper functions of this one main metrics callback? That doesn't seem right; I suppose they should also aggregate validation_step outputs themselves.
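
A correspondingly rough sketch of a plot callback aggregating its own validation_step outputs (reusing the imports above; MulticlassConfusionMatrix and its .plot() helper are from torchmetrics >= 1.0, everything else is illustrative):

```python
from torchmetrics.classification import MulticlassConfusionMatrix


class ConfusionMatrixCallback(pl.Callback):
    """Illustrative plot callback that aggregates its own validation outputs."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.confmat = MulticlassConfusionMatrix(num_classes)
        self._preds, self._targets = [], []

    def on_validation_batch_end(self, trainer, pl_module, outputs, batch,
                                batch_idx, dataloader_idx=0):
        self._preds.append(outputs["preds"].detach().cpu())
        self._targets.append(outputs["target"].detach().cpu())

    def on_validation_epoch_end(self, trainer, pl_module):
        # Per-class statistics (the average=None case) computed over the full
        # validation set, mirroring the metrics callback above.
        self.confmat.update(torch.cat(self._preds), torch.cat(self._targets))
        fig, ax = self.confmat.plot()
        # ... hand `fig` to trainer.logger here (TensorBoard, W&B, etc.)
        self.confmat.reset()
        self._preds.clear()
        self._targets.clear()
```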

@sbatchelder sbatchelder added the enhancement New feature or request label Dec 30, 2024
@sbatchelder sbatchelder self-assigned this Dec 30, 2024