How can I numerically determine if my model is over- or under-trained? #410
FugueSegue asked this question in Q&A (unanswered)
I see that "loss" is printed to the CLI while the model trains, and I think I understand what the term means. After training finishes, is there a way to review exactly what the loss value was at each step? Is this recorded in a file somewhere?
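In the meantime, one workaround is to redirect the console output to a file and parse the loss values out afterward. A minimal sketch, assuming a hypothetical log format where each step prints something like `step: 120, loss: 0.0831` (the exact format varies by trainer, so the regex would need adjusting):

```python
import re

# Hypothetical log format; adjust the pattern to whatever your trainer
# actually prints. Capture the step number and the loss value per line.
LOSS_RE = re.compile(r"step[:\s]+(\d+).*?loss[:\s]+([0-9]*\.?[0-9]+)")

def parse_losses(log_text: str) -> list[tuple[int, float]]:
    """Return (step, loss) pairs found in the captured console output."""
    return [(int(s), float(l)) for s, l in LOSS_RE.findall(log_text)]

# Illustrative sample, e.g. from: python train.py ... > training.log 2>&1
sample = "step: 1, loss: 0.21\nstep: 2, loss: 0.18\n"
print(parse_losses(sample))  # [(1, 0.21), (2, 0.18)]
```

Once the values are in a list, they can be plotted or inspected directly rather than read off the terminal.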
Is there an exact, numerical way to determine whether a model is over-trained? Judging test renders as "not looking right" seems too subjective.
This webpage discusses "validation loss" and "training loss" as metrics for detecting over- or under-training:
https://www.baeldung.com/cs/training-validation-loss-deep-learning
I have no idea whether one, both, or neither of those terms applies to Stable Diffusion model training. It's just a page I found while trying to understand how to train a model properly.