Keywords: Model Calibration; Uncertainty Quantification; Conformal Prediction
TL;DR: We propose a calibration framework for the prediction set size of conformal prediction, so that the prediction set not only satisfies the coverage guarantee but also aligns with accuracy.
Abstract: Given its flexibility and low computational cost, conformal prediction (CP) has become one of the most popular uncertainty quantification methods in recent years. For a deep classifier, CP generates a prediction set for each test sample that satisfies the $(1-\alpha)$ coverage guarantee, and the prediction set size (PSS) is then regarded as a reflection of predictive uncertainty. However, it is unknown whether the predictive uncertainty of CP is aligned with its predictive correctness, an essential property for predictive uncertainty. This work answers this open question by investigating the uncertainty calibration of CP in deep classifiers. We first define the uncertainty calibration of CP by building a connection between PSS and prediction accuracy, and then propose a calibration target for CP based on a theoretical analysis of the predictive distributions. Given this definition of CP calibration, we present an empirical study on several classification datasets and reveal that CP is weakly calibrated. To strengthen the calibration of CP, we propose CP-aware calibration (CPAC), a bi-level optimization algorithm, and demonstrate its effectiveness on several standard classification datasets with models including ResNet, Vision Transformer, and GPT-2.
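To make the coverage guarantee and the notion of prediction set size (PSS) concrete, below is a minimal sketch of standard split conformal prediction for a classifier, using the common score $s(x, y) = 1 - \hat{p}(y \mid x)$. This illustrates only the baseline CP procedure the abstract refers to, not the proposed CPAC method; the function names and synthetic data are hypothetical.

```python
# Minimal sketch of split conformal prediction (standard CP, not CPAC).
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal threshold from calibration scores s_i = 1 - p(y_i | x_i)."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample corrected quantile level for (1 - alpha) coverage.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def prediction_sets(test_probs, q_hat):
    """Include every label whose score 1 - p(y | x) is at most the threshold."""
    return test_probs >= 1.0 - q_hat  # boolean mask: rows = samples, columns = labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_cal, n_test, n_classes = 500, 200, 10
    cal_probs = softmax(rng.normal(size=(n_cal, n_classes)))
    cal_labels = rng.integers(n_classes, size=n_cal)
    test_probs = softmax(rng.normal(size=(n_test, n_classes)))

    q_hat = calibrate_threshold(cal_probs, cal_labels, alpha=0.1)
    sets = prediction_sets(test_probs, q_hat)
    pss = sets.sum(axis=1)  # prediction set size (PSS) per test sample
    print("mean PSS:", pss.mean())
```

Under the paper's framing, the calibration question is whether samples with larger PSS are indeed predicted less accurately; the abstract's proposed CPAC targets this alignment rather than the coverage guarantee itself.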
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 24672