| Field | Value |
|---|---|
| title | FedLTF: Linear Probing Teaches Fine-tuning to Mitigate Noisy Labels in Federated Learning |
| booktitle | Proceedings of the 16th Asian Conference on Machine Learning |
| year | 2025 |
| volume | 260 |
| series | Proceedings of Machine Learning Research |
| month | 0 |
| publisher | PMLR |
| openreview | UqbWgyNiRh |
| abstract | The presence of noisy labels has always been a primary factor affecting the effectiveness of federated learning (FL). Conventional FL approaches relying on Supervised Learning (SL) tend to overfit the noisy labels, resulting in a suboptimal Feature Extractor (FE). In this paper, we exploit models obtained via Self-Supervised Learning (SSL) to mitigate the impact of noisy labels in FL. In addition, we explore two popular methods for transferring to downstream tasks: linear probing, which updates only the last classification layer, and fine-tuning, which updates all model parameters. We empirically observe that, although fine-tuning typically yields higher accuracy than linear probing, it is very sensitive to noisy labels and suffers performance degradation in their presence. To achieve the best of both worlds (i.e., high accuracy and robustness against noisy labels), we “teach” fine-tuning to control overfitting. In particular, we leverage SSL to obtain a robust FE that is unaffected by noisy labels, and employ linear probing to train the classifiers. The FE and classifiers are integrated to construct a teacher model, which undergoes knowledge distillation to instruct the fine-tuning process of the student model. Extensive experimental evaluations conducted on multiple datasets demonstrate the effectiveness and robustness of our proposed framework against noisy labels in FL, outperforming state-of-the-art methods. |
| layout | inproceedings |
| issn | 2640-3498 |
| id | zhan25a |
| tex_title | {FedLTF}: {L}inear Probing Teaches Fine-tuning to Mitigate Noisy Labels in Federated Learning |
| firstpage | 1048 |
| lastpage | 1063 |
| page | 1048-1063 |
| order | 1048 |
| cycles | false |
| bibtex_editor | Nguyen, Vu and Lin, Hsuan-Tien |
| editor | |
| bibtex_author | Zhan, Shaojie and Yu, Lixing and Chen, Hanqi and Ji, Tianxi |
| author | |
| date | 2025-01-14 |
| container-title | Proceedings of the 16th Asian Conference on Machine Learning |
| genre | inproceedings |
| extras | |
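The abstract describes building a teacher from an SSL-pretrained feature extractor plus a linear-probed classifier, then distilling that teacher into a fully fine-tuned student. The sketch below is a minimal, non-federated illustration of that general idea, not the paper's actual method: the toy MLP backbone, the function names (`make_encoder`, `linear_probe`, `distill_finetune`), the temperature, and the loss weighting are all assumptions for illustration, and the federated aggregation across clients is omitted entirely.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for an SSL-pretrained feature extractor (backbone).
def make_encoder(in_dim=32, feat_dim=64):
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))

def linear_probe(encoder, head, loader, epochs=1, lr=1e-2):
    """Linear probing: train only the classifier head on top of a frozen encoder."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                feats = encoder(x)
            loss = F.cross_entropy(head(feats), y)
            opt.zero_grad(); loss.backward(); opt.step()

def distill_finetune(teacher_enc, teacher_head, student_enc, student_head,
                     loader, epochs=1, lr=1e-3, T=2.0, alpha=0.7):
    """Fine-tune the full student, guided by distillation from the linear-probed teacher."""
    params = list(student_enc.parameters()) + list(student_head.parameters())
    for p in params:               # re-enable gradients in case the encoder was frozen
        p.requires_grad_(True)
    opt = torch.optim.SGD(params, lr=lr)
    teacher_enc.eval(); teacher_head.eval()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                t_logits = teacher_head(teacher_enc(x))
            s_logits = student_head(student_enc(x))
            # Soft-label distillation term (temperature-scaled KL divergence).
            kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                          F.softmax(t_logits / T, dim=1),
                          reduction="batchmean") * T * T
            ce = F.cross_entropy(s_logits, y)  # supervised term; labels may be noisy
            loss = alpha * kd + (1 - alpha) * ce
            opt.zero_grad(); loss.backward(); opt.step()

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(256, 32)
    y = torch.randint(0, 10, (256,))
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(x, y), batch_size=32)

    encoder = make_encoder()                     # stands in for the SSL backbone
    teacher_head = nn.Linear(64, 10)
    linear_probe(encoder, teacher_head, loader)  # teacher = frozen FE + probed head

    student_enc = copy.deepcopy(encoder)         # student starts from the same FE
    student_head = nn.Linear(64, 10)
    distill_finetune(encoder, teacher_head, student_enc, student_head, loader)
```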