Modify the Dice loss #376
base: main
Conversation
@zifuwanggg Thanks for your commit; it's been very helpful for me. But after using your code, neither (_hard_label, hard_label) nor (_soft_label, soft_label) produced a loss close to 0. How can I fix this?
Its output: [screenshot]
@z-jiaming, thanks for the question.
@zifuwanggg Thanks for your reply!
only the [screenshot]
@z-jiaming, sorry, I made a mistake. Now the loss should work as expected.
Thanks a lot!!!!
The Dice loss in `training.loss_fns` is modified based on JDTLoss and segmentation_models.pytorch. The original Dice loss is incompatible with soft labels. For example, with a ground truth value of 0.5 for a single pixel, it is minimized when the predicted value is 1, which is clearly erroneous. To address this, the intersection term is rewritten as $\frac{|x|_1 + |y|_1 - |x - y|_1}{2}$. This reformulation is provably equivalent to the original version when the ground truth is binary (i.e., one-hot hard labels). Moreover, since the new version is minimized if and only if the prediction is identical to the ground truth, even when the ground truth includes fractional values, it resolves the issue with soft labels [1, 2].
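As a quick check of the equivalence claim (a derivation added here for clarity, not part of the original description): for a binary ground truth $y \in \{0, 1\}^n$ and a prediction $x \in [0, 1]^n$,

$$|x - y|_1 = \sum_{i : y_i = 1} (1 - x_i) + \sum_{i : y_i = 0} x_i = |x|_1 + |y|_1 - 2\langle x, y \rangle,$$

so $\frac{|x|_1 + |y|_1 - |x - y|_1}{2} = \langle x, y \rangle$, which is exactly the standard intersection term.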
Although the original SAM/SAM2 models were trained without soft labels, this modification enables soft label training for downstream fine-tuning without changing the existing behavior.
Example
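Below is a minimal sketch of the reformulated loss (not the exact code in this PR; the function name and signature are illustrative), assuming predictions and targets are probabilities of the same shape:

```python
import torch


def soft_dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Dice loss with the intersection rewritten as (|x|_1 + |y|_1 - |x - y|_1) / 2.

    Equivalent to the standard Dice loss for binary (one-hot) targets, and
    minimized iff pred == target even when the target contains fractional values.
    """
    pred = pred.flatten(1)
    target = target.flatten(1)
    cardinality = pred.sum(-1) + target.sum(-1)    # |x|_1 + |y|_1
    difference = (pred - target).abs().sum(-1)     # |x - y|_1
    intersection = (cardinality - difference) / 2  # reformulated intersection term
    dice = (2 * intersection + eps) / (cardinality + eps)
    return 1 - dice.mean()


# Sanity check mirroring the discussion above: the loss should be ~0 when the
# prediction equals the ground truth, for both hard and soft labels.
hard_label = torch.randint(0, 2, (4, 1, 8, 8)).float()
soft_label = torch.rand(4, 1, 8, 8)
print(soft_dice_loss(hard_label, hard_label))  # ~0
print(soft_dice_loss(soft_label, soft_label))  # ~0
```

With the standard formulation, `soft_dice_loss(soft_label, soft_label)` would not reach 0, since $2\langle x, y \rangle < |x|_1 + |y|_1$ whenever $x = y$ contains fractional values.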
References
[1] Dice Semimetric Losses: Optimizing the Dice Score with Soft Labels. Zifu Wang, Teodora Popordanoska, Jeroen Bertels, Robin Lemmens, Matthew B. Blaschko. MICCAI 2023.
[2] Jaccard Metric Losses: Optimizing the Jaccard Index with Soft Labels. Zifu Wang, Xuefei Ning, Matthew B. Blaschko. NeurIPS 2023.