Enhancing Adversarial Robustness with Conformal Prediction: A Framework for Guaranteed Model Reliability
TL;DR: This paper proposes an adversarial attack that maximizes conformal prediction-set uncertainty and a matching adversarial training defense that enhances robustness.
Abstract: As deep learning models are increasingly deployed in high-risk applications, robust defenses against adversarial attacks and reliable performance guarantees become paramount. Moreover, accuracy alone does not provide sufficient assurance or reliable uncertainty estimates for these models. This study advances adversarial training by leveraging principles from Conformal Prediction. Specifically, we develop an adversarial attack method, termed OPSA (OPtimal Size Attack), designed to reduce the efficiency of conformal prediction at any significance level by maximizing model uncertainty without requiring coverage guarantees. Correspondingly, we introduce OPSA-AT (Adversarial Training), a defense strategy that integrates OPSA within a novel conformal training paradigm. Experimental evaluations demonstrate that our OPSA attack induces greater uncertainty than baseline approaches across various defenses. In turn, our OPSA-AT defensive model significantly enhances robustness not only against OPSA but also against other adversarial attacks, while maintaining reliable prediction coverage. Our findings highlight the effectiveness of this integrated approach for developing trustworthy and resilient deep learning models for safety-critical domains. Our code is available at https://github.com/bjbbbb/Enhancing-Adversarial-Robustness-with-Conformal-Prediction.
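To make the efficiency notion in the abstract concrete, below is a minimal sketch of split conformal prediction (not the paper's exact implementation): a score threshold is calibrated on held-out data, and prediction sets are formed whose average size ("inefficiency") is the quantity an uncertainty-maximizing attack such as OPSA would try to inflate. The function names, the significance level alpha, and the synthetic data are illustrative assumptions.

```python
# Minimal split conformal prediction sketch: calibrate a quantile threshold,
# then build prediction sets; average set size is the efficiency metric.
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Return the conformal quantile of the scores 1 - p(true class)."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level for (1 - alpha) coverage.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(level, 1.0), method="higher")

def prediction_sets(test_probs, qhat):
    """Include every class whose score 1 - p(class) falls below the threshold."""
    return (1.0 - test_probs) <= qhat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cal_probs = rng.dirichlet(np.ones(10), size=500)   # stand-in softmax outputs
    cal_labels = rng.integers(0, 10, size=500)
    qhat = calibrate_threshold(cal_probs, cal_labels, alpha=0.1)
    sets = prediction_sets(rng.dirichlet(np.ones(10), size=100), qhat)
    print("average prediction-set size:", sets.sum(axis=1).mean())
```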
Lay Summary: This research explores an important but underexplored topic in deep learning adversarial attacks: targeting conformal prediction set size rather than classification accuracy. We develop a novel conformal training framework that uses hard quantile thresholds instead of smoothed approximations, which can be inaccurate; the framework serves the dual purposes of identifying the strongest attacks and training the most robust defensive models. Our approach includes OPSA (Optimal Size Attack), which maximizes uncertainty by enlarging prediction sets without knowing the defender's confidence requirements, and OPSA-AT (Adversarial Training), a defense strategy that leverages our hard quantile framework to train models resistant to uncertainty-maximizing attacks while maintaining reliable prediction coverage and compact prediction sets. Experiments on standard image datasets demonstrate that our attack method generates significantly larger prediction sets than existing approaches, while our defense method produces more compact, reliable prediction sets compared to baselines. This work is particularly valuable for safety-critical applications like autonomous driving and medical diagnosis, where both robustness against attacks and reliable uncertainty quantification are essential.
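For intuition on how an attack can enlarge prediction sets, here is a hypothetical PGD-style sketch, not the paper's OPSA implementation: given a fixed conformal threshold qhat (as calibrated above), an input is perturbed within an L-infinity ball to maximize a smooth surrogate of the prediction-set size, i.e., the number of classes whose score 1 - p(class) falls below qhat. The sigmoid surrogate, the temperature tau, the step sizes, and the function name set_size_attack are all illustrative assumptions rather than the authors' method.

```python
# Hypothetical gradient-ascent attack on prediction-set size (illustrative only).
import torch
import torch.nn.functional as F

def set_size_attack(model, x, qhat, eps=8/255, alpha=2/255, steps=10, tau=0.05):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        probs = F.softmax(model(x + delta), dim=1)
        scores = 1.0 - probs                        # per-class conformity scores
        # Smooth count of classes falling inside the prediction set.
        soft_size = torch.sigmoid((qhat - scores) / tau).sum(dim=1).mean()
        soft_size.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()      # ascend on the soft set size
            delta.clamp_(-eps, eps)                 # stay in the L-infinity ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep pixels in [0, 1]
        delta.grad.zero_()
    return (x + delta).detach()
```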
Link To Code: https://github.com/bjbbbb/Enhancing-Adversarial-Robustness-with-Conformal-Prediction
Primary Area: General Machine Learning
Keywords: adversarial attack, game theory, adversarial training, conformal prediction, conformal training
Submission Number: 10372