Keywords: Conformal prediction, online learning, adversarial Bayes
TL;DR: We propose a Bayesian approach to adversarial online conformal prediction, which, due to its "data-centric" nature, improves upon existing "iterate-centric" first-order optimization baselines.
Abstract: Based on the framework of *Conformal Prediction* (CP), we study the online construction of valid confidence sets given a black-box machine learning model. By converting the targeted confidence levels to quantile levels, the problem reduces to predicting the quantiles (in hindsight) of a sequentially revealed data sequence, where existing results can be divided into two types.
- Assuming the data sequence is iid, one could maintain the empirical distribution of the observed data as an algorithmic belief, and directly predict its quantiles.
- As the iid assumption is often violated in practice, a recent trend is to apply first-order online optimization on moving quantile losses (Gibbs & Candes, 2021). This indirect approach requires knowing the targeted quantile level beforehand, and suffers from certain validity issues on the obtained confidence sets, due to the associated loss linearization.
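The two existing approaches above can be contrasted in a minimal sketch. The function and parameter names below are ours, not the paper's: `empirical_quantile_predict` is the direct "data-centric" iid procedure, while `aci_step` is a schematic first-order update in the spirit of Gibbs & Candes (2021), which nudges a single miscoverage iterate and therefore needs the target level fixed in advance.

```python
import numpy as np

def empirical_quantile_predict(history, q):
    """Data-centric: answer any quantile query q directly from the
    empirical distribution of observed scores (valid under iid)."""
    return float(np.quantile(history, q)) if len(history) else 0.0

def aci_step(alpha_t, miscovered, alpha, eta=0.05):
    """Iterate-centric: one gradient-style step on the (linearized)
    quantile loss for a *fixed* target miscoverage level alpha.
    Increase the working level after coverage, decrease after a miss."""
    err = 1.0 if miscovered else 0.0
    return alpha_t + eta * (alpha - err)
```

Note how the first function serves arbitrary `q` from one belief, whereas the second maintains a separate scalar iterate per confidence level.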
This paper presents a Bayesian approach that combines their strengths. Without any statistical assumption, it is able to both
- answer multiple arbitrary confidence level queries online, with provably low regret; and
- overcome the validity issues suffered by first-order optimization baselines, due to being "data-centric" rather than "iterate-centric".
From a technical perspective, our key idea is to take the above iid-based procedure and regularize its algorithmic belief by a Bayesian prior, which "robustifies" it by simulating a non-linearized *Follow the Regularized Leader* (FTRL) algorithm on the output. For statisticians, this can be regarded as an online adversarial view of Bayesian nonparametric distribution estimation. Importantly, the proposed belief update backbone is shared by "prediction heads" targeting different confidence levels, bringing practical benefits similar to U-calibration (Kleinberg et al., 2023).
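One way to picture the prior-regularized belief update is the following illustrative sketch (our own construction, not the paper's algorithm): the empirical distribution is mixed with weighted prior pseudo-samples, and a single belief state then answers arbitrary quantile queries, mirroring the shared "backbone" with per-level "prediction heads".

```python
import numpy as np

def regularized_quantile(history, q, prior_samples, prior_weight=1.0):
    """Hypothetical illustration: regularize the empirical belief with
    prior pseudo-samples, then answer any quantile query q from the
    combined weighted distribution. One belief serves all levels."""
    data = np.asarray(history, dtype=float)
    prior = np.asarray(prior_samples, dtype=float)
    combined = np.concatenate([data, prior])
    # Prior points carry prior_weight relative to each real observation.
    weights = np.concatenate(
        [np.ones(len(data)), np.full(len(prior), prior_weight)]
    )
    order = np.argsort(combined)
    cdf = np.cumsum(weights[order]) / weights.sum()
    idx = min(int(np.searchsorted(cdf, q)), len(combined) - 1)
    return float(combined[order][idx])
```

As `prior_weight` shrinks relative to the data, the output recovers the plain empirical quantile; a heavier prior "robustifies" early-round predictions, loosely analogous to the FTRL regularization described above.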
Submission Number: 51