feat: ✨ ClearML training loss logging #1844
Conversation
Thanks :)
Codecov Report: All modified and coverable lines are covered by tests ✅

@@           Coverage Diff            @@
##             main    #1844     +/-   ##
==========================================
+ Coverage   96.58%   96.61%   +0.02%
==========================================
  Files         165      165
  Lines        7940     7940
==========================================
+ Hits         7669     7671       +2
+ Misses        271      269       -2
Sorry, weren't there plans to make logging more universal?
Hi @kuraga 👋🏼, what do you mean by other loggers?
@felixdittrich92 Hm... I like the lightweight lazyscribe, BTW. I'm not asking to implement other loggers, but about some abstract layer...
Ah, got it. I thought the question was about something ClearML-specific 😅 For me personally, W&B is still the answer, but everyone can use what they prefer. Mh, yes, I understood the request. The thing is that the training scripts / references folder is more a "collection" of scripts everyone can modify to their own needs, which has some advantages, but on the other side also disadvantages (for example, for adding globally available features shared between the different training tasks, like logging, early stopping, and so on) with an …
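The abstract layer @kuraga asks about could look something like the sketch below: a tiny scalar-logging protocol that the training scripts call, with each tracker (ClearML, W&B, lazyscribe, ...) plugged in as an adapter. All names here are hypothetical, not docTR's actual code.

```python
from typing import Protocol


class ScalarLogger(Protocol):
    """Minimal abstract layer: any backend only needs to provide log_scalar()."""

    def log_scalar(self, name: str, value: float, step: int) -> None: ...


class InMemoryLogger:
    """Trivial backend, useful for tests or when no tracker is installed."""

    def __init__(self) -> None:
        self.records: list = []

    def log_scalar(self, name: str, value: float, step: int) -> None:
        self.records.append((name, value, step))


# A ClearML adapter would forward to Logger.current_logger().report_scalar(...),
# a W&B adapter to wandb.log({name: value}, step=step); both satisfy ScalarLogger.
def train_one_epoch(losses, logger: ScalarLogger, start_step: int = 0) -> None:
    """Toy training loop: 'losses' stands in for per-batch loss values."""
    for step, loss in enumerate(losses, start=start_step):
        logger.log_scalar("train_loss", loss, step)
```

With this shape, swapping trackers is a one-line change at script startup rather than edits scattered through every reference script.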
Aaahh, I didn't catch that. My fault. Training docTR is mostly Mindee's internal job. Thanks, @felixdittrich92 :) P.S. Add READMEs to …
Mh, not directly: we use the same scripts for pre-training all the models, but users can also use them for fine-tuning on their own needs / own datasets :) The inner task-specific folders already have READMEs :)
And the corresponding doc: https://mindee.github.io/doctr/latest/using_doctr/custom_models_training.html
Well, if the mechanism is public, I'll illustrate my …: the … Update: hmm, now I see …
Is there any reason why docTR doesn't output the training loss log to W&B? How can I tell if the model is overfitting or underfitting?
Oh, you are right, we should do this for every train script, the same way it's done with the …
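Mirroring how validation metrics are reported, per-iteration training loss could be pushed to ClearML with a small helper like the sketch below. `report_train_loss` is a hypothetical name, not docTR's actual code; only ClearML's `Logger.current_logger()` / `report_scalar(title, series, value, iteration)` API is real.

```python
def report_train_loss(loss_value: float, iteration: int, logger=None) -> None:
    """Report one training-loss point to ClearML (hypothetical helper).

    If no logger is passed, try to grab the active ClearML logger; if ClearML
    isn't installed, skip silently so the train script still runs.
    """
    if logger is None:
        try:
            # ClearML's scalar API: report_scalar(title, series, value, iteration)
            from clearml import Logger
            logger = Logger.current_logger()
        except ImportError:
            return  # no tracker installed: nothing to do
    logger.report_scalar("Loss", "train_loss", value=loss_value, iteration=iteration)
```

Calling this once per batch (with the global step as `iteration`) draws the training-loss curve next to the validation one, which is what makes over/underfitting visible.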
@ThanhNX0401 Fixed now on main branch 😊 |
No description provided.