---
title: Towards Calibrated Losses for Adversarial Robust Reject Option Classification
booktitle: Proceedings of the 16th Asian Conference on Machine Learning
year: '2025'
volume: '260'
series: Proceedings of Machine Learning Research
month: 0
publisher: PMLR
pdf:
url:
software:
openreview: lx046w4JHs
abstract: Robustness to adversarial attacks is a vital property for classifiers in applications such as autonomous driving and medical diagnosis. In such scenarios, where the cost of misclassification is very high, knowing when to abstain from prediction also becomes crucial. A natural question is which surrogates can be used to ensure learning when the input points are adversarially perturbed and the classifier can abstain from prediction. This paper aims to characterize and design surrogates calibrated in the "Adversarial Robust Reject Option" setting. First, we propose an adversarial robust reject option loss $\ell_{d}^{\gamma}$ and analyze it for the hypothesis set of linear classifiers $\mathcal{H}_{\text{lin}}$. Next, we provide a complete characterization result for any surrogate to be $(\ell_{d}^{\gamma}, \mathcal{H}_{\text{lin}})$-calibrated. To demonstrate the difficulty of designing surrogates for $\ell_{d}^{\gamma}$, we show negative calibration results for convex surrogates and for surrogates with quasi-concave conditional risk (both of which yield positive calibration in the adversarial setting without the reject option). We then empirically argue that the shifted Double Ramp Loss (DRL) and shifted Double Sigmoid Loss (DSL) satisfy the calibration conditions. Finally, we demonstrate the robustness of the shifted DRL and shifted DSL against adversarial perturbations on a synthetically generated dataset.
layout: inproceedings
issn: 2640-3498
id: shah25a
tex_title: Towards Calibrated Losses for Adversarial Robust Reject Option Classification
firstpage: '1256'
lastpage: '1271'
page: 1256-1271
order: '1256'
cycles: false
bibtex_editor: Nguyen, Vu and Lin, Hsuan-Tien
editor:
- given: Vu
  family: Nguyen
- given: Hsuan-Tien
  family: Lin
bibtex_author: Shah, Vrund and Chaudhari, Tejas Kiran and Manwani, Naresh
author:
- given: Vrund
  family: Shah
- given: Tejas Kiran
  family: Chaudhari
- given: Naresh
  family: Manwani
date: 2025-01-14
address:
container-title: Proceedings of the 16th Asian Conference on Machine Learning
genre: inproceedings
issued:
  date-parts:
  - 2025
  - 1
  - 14
extras:
---
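To give a concrete feel for the shifted double ramp loss mentioned in the abstract, the sketch below implements one common parameterization of a double ramp loss with a rejection cost `d`, a rejection band of half-width `rho`, and a margin shift `gamma` standing in for the worst-case adversarial perturbation of a linear classifier's margin. The names `rho`, `mu`, the way the shift enters, and the exact ramp placement are illustrative assumptions, not the paper's definition of its shifted DRL.

```python
import numpy as np

def ramp(z, mu=1.0):
    """Clipped linear ramp: 0 for z <= 0, rises linearly, saturates at 1 for z >= mu."""
    return np.clip(z / mu, 0.0, 1.0)

def shifted_double_ramp_loss(margin, rho=1.0, d=0.25, gamma=0.1, mu=1.0):
    """Sketch of a shifted double ramp loss for reject option classification.

    margin : y * f(x) for each example (array-like)
    rho    : half-width of the rejection band around the decision boundary
    d      : cost of rejection (typically 0 < d < 0.5)
    gamma  : margin shift modeling the adversarial perturbation budget
             (folded into a single scalar here for simplicity; this is an
             assumption for illustration)
    mu     : width of each ramp's linear transition region
    """
    m = np.asarray(margin, dtype=float) - gamma  # worst-case (shifted) margin
    # Outer ramp: pay the rejection cost d once the shifted margin falls
    # below the accept threshold +rho (the point lands in the reject band).
    reject_part = d * ramp(rho + mu - m, mu=mu)
    # Inner ramp: pay the remaining 1 - d once the shifted margin also falls
    # below -rho, i.e. the point is misclassified even with rejection allowed.
    error_part = (1.0 - d) * ramp(-rho + mu - m, mu=mu)
    return reject_part + error_part

# Margins well inside the accept region incur ~0 loss, margins in the reject
# band incur ~d, and strongly misclassified points incur ~1.
print(shifted_double_ramp_loss(np.array([3.0, 0.1, -3.0]), rho=1.0, d=0.25, gamma=0.1))
# -> approximately [0.0, 0.25, 1.0]
```

The two plateaus (0, d, 1) mirror the target reject option loss: correct with margin, rejected, and misclassified. The shift by `gamma` moves both thresholds toward the adversary, so a point must clear the rejection band even under the worst-case perturbation to incur zero loss.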