Adversarially Robust Conformal Prediction

29 Sept 2021 (edited 16 Feb 2022) · ICLR 2022 Poster
  • Keywords: Conformal Prediction, Adversarial Robustness, Randomized Smoothing, Uncertainty Estimation, Calibration
  • Abstract: Conformal prediction is a model-agnostic tool for constructing prediction sets that are valid under the common i.i.d. assumption; it has been applied to quantify the prediction uncertainty of deep net classifiers. In this paper, we generalize this framework to the case where adversaries exist at inference time, under which the i.i.d. assumption is grossly violated. By combining conformal prediction with randomized smoothing, our proposed method forms a prediction set with a finite-sample coverage guarantee that holds for any data distribution with $\ell_2$-norm bounded adversarial noise, generated by any adversarial attack algorithm. The core idea is to bound the Lipschitz constant of the non-conformity score by smoothing it with Gaussian noise, and to leverage this bound to account for the effect of the unknown adversarial perturbation. We demonstrate the necessity of our method in the adversarial setting and the validity of our theoretical guarantee on three widely used benchmark data sets: CIFAR10, CIFAR100, and ImageNet.
  • One-sentence Summary: A multi-class calibration procedure that is provably robust to adversarial attacks
  • Supplementary Material: zip
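The abstract's core recipe (smooth the non-conformity score with Gaussian noise so it becomes Lipschitz, then inflate the conformal calibration quantile by the worst-case score shift an $\ell_2$-bounded attack can induce) can be sketched as follows. This is a minimal illustration under assumed placeholders, not the paper's implementation: `score_fn`, `sigma`, the Monte Carlo sample count, and the `lipschitz` constant are all hypothetical inputs.

```python
import numpy as np

def smoothed_score(score_fn, x, y, sigma=0.25, n_samples=64, rng=None):
    """Monte Carlo estimate of the Gaussian-smoothed non-conformity score.
    Averaging score_fn over N(0, sigma^2 I) input noise yields a function
    whose Lipschitz constant can be bounded in terms of sigma."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(0.0, sigma, size=(n_samples,) + np.shape(x))
    return float(np.mean([score_fn(x + d, y) for d in noise]))

def robust_threshold(cal_scores, alpha, epsilon, lipschitz):
    """Standard conformal quantile at level 1 - alpha, inflated by the
    worst-case shift (lipschitz * epsilon) an l2-bounded attack of radius
    epsilon can induce in the smoothed score."""
    n = len(cal_scores)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(cal_scores, q_level, method="higher")
    return q + lipschitz * epsilon

def prediction_set(score_fn, x, labels, tau, sigma=0.25, n_samples=64):
    """All labels whose smoothed score falls below the inflated threshold."""
    return [y for y in labels
            if smoothed_score(score_fn, x, y, sigma, n_samples) <= tau]
```

The inflated threshold trades set size for robustness: a clean test point keeps the usual coverage guarantee, while an adversarially perturbed one cannot shift its smoothed score past the added `lipschitz * epsilon` margin.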