Trust Me, I’m Calibrated: Robustifying Deep Networks

TMLR Paper4683 Authors

16 Apr 2025 (modified: 22 Apr 2025) · Under review for TMLR · CC BY 4.0
Abstract: The tremendous success of deep neural networks (DNNs) in solving a wide range of complex computer vision tasks has paved the way for their deployment in real-world applications. However, challenges arise when these models are exposed to natural adversarial corruptions that occur in unconstrained physical environments. Such corruptions are inherent to the real world and can significantly degrade model performance by causing incorrect predictions. This vulnerability is further exacerbated by the miscalibration of modern DNNs, which tend to output incorrect predictions with high confidence. To ensure safe and reliable deployment, it is crucial to calibrate these models correctly. While the existing literature focuses primarily on calibrating DNNs, it often overlooks the impact of adversarial corruption. Substantial scope therefore remains to explore how calibration techniques interact with adversarial robustness, and whether improving calibration can increase robustness to corrupted or adversarial data. In this work, we address this gap by employing uncertainty quantification methods to improve the calibration and robustness of DNNs and Transformer-based models against adversarial data.
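For reference, the miscalibration the abstract describes is conventionally measured with the Expected Calibration Error (ECE): predictions are grouped into confidence bins, and the per-bin gap between accuracy and mean confidence is averaged. The sketch below is a minimal NumPy illustration of this standard metric, not code from the submission; the function name and the 15-bin default are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """Standard ECE: bin samples by predicted confidence and average the
    absolute gap between per-bin accuracy and per-bin mean confidence,
    weighted by the fraction of samples in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()       # accuracy within the bin
            conf = confidences[in_bin].mean()  # mean confidence within the bin
            ece += in_bin.mean() * abs(acc - conf)
    return ece
```

A perfectly calibrated model yields an ECE of zero; the overconfident-but-wrong behavior the abstract highlights shows up as bins where mean confidence substantially exceeds accuracy.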
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Venkatesh_Babu_Radhakrishnan2
Submission Number: 4683