Keywords: online learning, conformal prediction, adversarial Bayes
TL;DR: We propose a Bayesian approach to adversarial online conformal prediction, which, due to its "data-centric" nature, improves upon existing "iterate-centric" first-order optimization baselines.
Abstract: Based on the framework of Conformal Prediction (CP), we study the online construction of confidence sets given a black-box machine learning model. By converting the target confidence levels into quantile levels, the problem can be reduced to predicting the quantiles (in hindsight) of a sequentially revealed data sequence. Two very different approaches have been studied previously:
- Assuming the data sequence is iid or exchangeable, one could maintain the empirical distribution of the observed data as an algorithmic belief, and directly predict its quantiles.
- Because such statistical assumptions are fragile, a recent trend is to consider the adversarial setting and apply first-order online optimization to moving quantile losses. This approach requires oracle knowledge of the target quantile level, and suffers from a previously overlooked monotonicity issue caused by the associated loss linearization.
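The two baselines above can be contrasted in a minimal sketch (the function names, the zero initialization, and the fixed learning rate are illustrative assumptions, not details from the paper):

```python
import numpy as np

def empirical_quantile_predictions(scores, alpha):
    """Baseline 1 (iid belief): before each round, predict the alpha-quantile
    of the empirical distribution of previously observed scores."""
    preds, past = [], []
    for s in scores:
        preds.append(float(np.quantile(past, alpha)) if past else 0.0)
        past.append(s)
    return preds

def ogd_pinball_predictions(scores, alpha, lr=0.1):
    """Baseline 2 (adversarial): online subgradient descent on the pinball
    (quantile) loss. Note it needs the target level alpha in advance --
    the "oracle knowledge" mentioned above."""
    q, preds = 0.0, []
    for s in scores:
        preds.append(q)
        # Subgradient of the pinball loss
        # l(q; s) = max(alpha * (s - q), (alpha - 1) * (s - q)).
        grad = -alpha if s > q else (1.0 - alpha)
        q -= lr * grad
    return preds
```

The first predictor is "data-centric" (its state is the whole observed distribution, so any quantile can be read off), while the second is "iterate-centric" (its state is a single scalar tied to one quantile level).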
This paper presents a CP algorithm that combines their strengths. Without any statistical assumption, it is able to answer multiple arbitrary confidence level queries with low regret, while also overcoming the monotonicity issue suffered by first-order optimization baselines. Furthermore, if the data sequence is indeed iid, then the same algorithm is automatically equipped with the "correct" coverage probability guarantee.
From a technical perspective, our key idea is to regularize the aforementioned algorithmic belief (the empirical distribution) by a Bayesian prior, which robustifies it by simulating a non-linearized Follow the Regularized Leader (FTRL) algorithm on the output. Such a belief update backbone is shared by prediction heads targeting different confidence levels, bringing practical benefits analogous to the recently proposed concept of U-calibration (Kleinberg et al., 2023).
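A minimal sketch of the prior-regularized belief idea follows. The class name, the pseudo-observation representation of the prior, and the uniform prior support in the test are illustrative assumptions, not the paper's exact construction; the point is that one shared belief serves quantile queries at arbitrary confidence levels:

```python
import numpy as np

class BayesianQuantileBelief:
    """Maintain a single algorithmic belief: observed data plus weighted
    prior pseudo-observations. The prior plays the role of a regularizer
    on the empirical distribution; every confidence-level "head" reads its
    quantile from this same shared belief."""

    def __init__(self, prior_support, prior_weight=1.0):
        self.prior = [float(v) for v in prior_support]  # prior pseudo-observations
        self.w = prior_weight                           # weight per pseudo-observation
        self.data = []

    def update(self, s):
        # Shared belief update: all prediction heads see the same data.
        self.data.append(float(s))

    def quantile(self, alpha):
        # Weighted empirical quantile of (prior pseudo-observations + data).
        vals = np.array(self.prior + self.data)
        wts = np.array([self.w] * len(self.prior) + [1.0] * len(self.data))
        order = np.argsort(vals)
        cum = np.cumsum(wts[order]) / wts.sum()
        return float(vals[order][np.searchsorted(cum, alpha)])
```

With no data, predictions fall back on the prior; as data accumulates, the belief approaches the plain empirical distribution, recovering the iid baseline's behavior.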
Submission Number: 29