Powerful batch conformal prediction for classification

Published: 22 Jan 2025, Last Modified: 11 Mar 2025 · AISTATS 2025 Poster · CC BY 4.0
Abstract:

In a split conformal framework with $K$ classes, a calibration sample of $n$ labeled examples is observed for inference on the label of a new unlabeled example. We explore the setting where a 'batch' of $m$ independent such unlabeled examples is given, and the goal is to construct a batch prediction set with $1-\alpha$ coverage. Unlike individual prediction sets, the batch prediction set is a collection of label vectors of size $m$, while the calibration sample consists of univariate labels. A natural approach is to apply the Bonferroni correction, which concatenates individual prediction sets at level $1-\alpha/m$. We propose a uniformly more powerful solution, based on specific combinations of conformal $p$-values that exploit the Simes inequality. We provide a general recipe for valid inference with any combination of conformal $p$-values, and compare the performance of several useful choices. Intuitively, the pooled evidence of relatively 'easy' examples within the batch can help provide narrower batch prediction sets. Additionally, we introduce a more computationally intensive method that aggregates batch scores and can be even more powerful. The theoretical guarantees are established when all examples are independent and identically distributed (iid), as well as more generally when iid is assumed only conditionally within each class. Notably, our results remain valid under label distribution shift, since the distribution of the labels need not be the same in the calibration sample and in the new batch. The effectiveness of the methods is highlighted through illustrative synthetic and real data examples.
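To make the Bonferroni vs. Simes comparison concrete, here is a minimal sketch (not the authors' implementation): it assumes standard split-conformal $p$-values computed against a shared calibration sample and brute-force enumeration of all $K^m$ label vectors. Function names such as `batch_prediction_set` and `simes_pvalue` are ours, and the label-conditional calibration and aggregated-score methods from the abstract are omitted.

```python
import itertools
import numpy as np

def conformal_pvalue(cal_scores, score):
    """Split-conformal p-value of one (example, candidate label) score
    against the calibration scores: (1 + #{i : s_i >= score}) / (n + 1).
    Sketch only; the paper's label-conditional variant would restrict
    cal_scores to calibration examples with the candidate label."""
    return (1.0 + np.sum(cal_scores >= score)) / (len(cal_scores) + 1.0)

def simes_pvalue(pvals):
    """Simes combination of m p-values: min over j of (m / j) * p_(j),
    where p_(1) <= ... <= p_(m) are the sorted p-values."""
    m = len(pvals)
    p_sorted = np.sort(pvals)
    return float(np.min(m * p_sorted / np.arange(1, m + 1)))

def batch_prediction_set(pval_matrix, alpha, combine="simes"):
    """All label vectors whose combined p-value exceeds alpha.

    pval_matrix : (m, K) array; entry [j, y] is the conformal p-value
    of candidate label y for the j-th example in the batch. Enumerating
    all K**m label vectors is only feasible for small m and K."""
    m, K = pval_matrix.shape
    kept = []
    for labels in itertools.product(range(K), repeat=m):
        p = pval_matrix[np.arange(m), list(labels)]
        if combine == "bonferroni":
            keep = p.min() > alpha / m        # each individual p > alpha/m
        else:
            keep = simes_pvalue(p) > alpha    # Simes combination
        if keep:
            kept.append(labels)
    return kept
```

Because the $j=1$ term of the Simes minimum equals the Bonferroni bound $m \, p_{(1)}$, the Simes combined $p$-value never exceeds the Bonferroni one, so the Simes-based batch set is always contained in the Bonferroni set; this is the sense in which the combination is uniformly more powerful.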

Submission Number: 1180