initial commit of the branch SSL_comparison #205
In the medical domain
"while there is a lot of unlabeled data in data health records" → Awkward phrasing. Suggested: "while a large amount of unlabeled data exists in health records."
The first one trains classifiers jointly
the second is a two-stage approach
General comment: you might gain readability by using bullet points for the semi-supervised and self-supervised methods, such as:
Semi-supervised: introductory content
SSL: introductory content, with the two-stage approach
ChatGPT's suggested rephrasing (a second review for the price of one, hehe):
In the medical domain, annotated data for deep learning is often scarce, while a large amount of unlabeled data exists in health records. Semi-supervised and self-supervised learning methods have been developed to leverage this unlabeled data and improve classification accuracy. The former trains classifiers jointly with two loss terms, while the latter follows a two-stage approach: first, learning deep representations, and second, fine-tuning the classifier. However, these two methods are rarely compared.
"benchmarks often don't consider this" → Not very clear imo; it could be more explicit. Maybe you could add another sentence instead of the note in parentheses.
"Self-supervised | 0 then 1 | 1 then 0" → Could be clearer. Suggested: "Self-supervised | Pretraining (0) → Fine-tuning (1) | Pretraining (1) → Fine-tuning (0)."
I agree with ChatGPT on this point, but I can understand that it doesn't fit nicely, so up to you.
"creating new training samples by mixing two samples from the original dataset" → Suggested: "by linearly combining two training samples and their corresponding labels."
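For concreteness, here is a minimal sketch of what that linear combination looks like (my own illustration, not code from the paper; the function name and `alpha` default are assumptions based on the usual MixUp formulation):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Create a new training sample by linearly combining two samples
    and their (one-hot) labels with a Beta-distributed coefficient."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    x = lam * x1 + (1 - lam) * x2         # mixed input
    y = lam * y1 + (1 - lam) * y2         # mixed (soft) label
    return x, y
```

The mixed label stays a valid probability vector, which is why the phrasing "and their corresponding labels" matters.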
I am a bit confused by this subsection. I imagine this covers both the methods and their notation for the compared supervised methods; maybe it is lacking an introductory sentence (as you did in the next subsections).
"sets the label to unlabeled data to the class..." → Suggested: "assigns labels to unlabeled data based on the class with the highest predicted probability."
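To illustrate the suggested wording, a tiny pseudo-labeling sketch (again my own illustration; the `-1` sentinel for "keep unlabeled" and the threshold value are assumptions, not from the paper):

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    """Assign to each unlabeled sample the class with the highest
    predicted probability, but only when the model is confident enough;
    low-confidence samples stay unlabeled (marked -1)."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    labels[confidence < threshold] = -1   # below threshold: leave unlabeled
    return labels
```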
combine labeled and unlabeled data
"with a learning nonlinear transformation..." → Awkward phrasing. Suggested: "using a learnable nonlinear transformation..."
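To make the "learnable nonlinear transformation" wording concrete, a SimCLR-style projection head is typically a small MLP applied to the encoder output; a forward-pass sketch (weights would be learned in practice, shapes here are illustrative assumptions):

```python
import numpy as np

def projection_head(h, W1, b1, W2, b2):
    """Learnable nonlinear transformation z = W2 @ relu(W1 @ h + b1) + b2,
    mapping encoder representations h to the space where the
    contrastive loss is computed."""
    hidden = np.maximum(h @ W1 + b1, 0.0)   # ReLU nonlinearity
    return hidden @ W2 + b2
```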
"...but comparing different augmentation of the same images." → Correct: "...but compares different augmentations of the same image."
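Relatedly, "compares different augmentations of the same image" usually means an InfoNCE-style loss where matched augmented views are positives; a minimal NumPy sketch of that idea (my own illustration, not the paper's exact loss; temperature value is an assumption):

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Contrastive loss: z1[i] and z2[i] are representations of two
    augmentations of the same image; each row should be most similar
    to its own counterpart among all rows of z2."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / tau                       # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # positives on the diagonal
```

Matched views on the diagonal give a lower loss than mismatched ones, which is the whole point of the comparison.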
taking the best using the validation set.
"As there is a rough monotonic improvement in test performance..." → Suggested: " As a roughly monotonic improvement in test performance is observed..."
MixMatch represents the best overall choice.