Controlling Coverage of Uncertainty Sets for Batch Evaluation via Vanilla Conformal Prediction

TMLR Paper 6757 Authors

02 Dec 2025 (modified: 30 Jan 2026) · Rejected by TMLR · CC BY 4.0
Abstract: Conformal prediction (CP) provides provable coverage guarantees over uncertainty sets for any given black-box predictive model. Standard split CP guarantees that, for a single test input, the uncertainty set contains the true output with a user-specified probability $1 - \alpha$ (say 90\%). However, in many real-world applications, practitioners evaluate the predictive model on a batch of test inputs after calibrating on a fixed set. The marginal coverage guarantee of split CP provides no direct control over the realized false-coverage proportion (FCP) across a batch of inputs. This paper develops a simple and effective approach referred to as {\em Probably Approximately Correct FCP (PAC-FCP)}. PAC-FCP leverages the key insight that the FCP over a batch of test inputs from split CP follows a Beta-Binomial distribution, and it inverts the Beta-Binomial tail to find the most permissive calibration level that yields a high-probability guarantee on the FCP using vanilla CP methods. We provide theoretical analysis of the validity and effectiveness of PAC-FCP using prior theoretical results. Our experimental results on 17 OpenML benchmarks for regression and ImageNet data for classification demonstrate that PAC-FCP achieves the specified FCP rate with smaller prediction sets/intervals.
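The tail inversion described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: it assumes that with $n$ calibration points and calibration level $\alpha'$, the miscoverage count $K$ over a batch of $m$ test points follows a Beta-Binomial$(m,\, l,\, n+1-l)$ distribution with $l = \lfloor (n+1)\alpha' \rfloor$, and searches over a grid of candidate levels for the largest $\alpha'$ whose tail probability $P(\mathrm{FCP} > \alpha)$ stays below a tolerance $\delta$. The function names (`pac_fcp_level`, `betabinom_tail`) and the grid-search strategy are our own choices for exposition.

```python
import math


def log_beta(a, b):
    """log of the Beta function B(a, b), via log-gamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)


def betabinom_tail(k, m, a, b):
    """P(K > k) for K ~ Beta-Binomial(m, a, b), summed directly."""
    total = 0.0
    for j in range(k + 1, m + 1):
        log_pmf = (math.lgamma(m + 1) - math.lgamma(j + 1)
                   - math.lgamma(m - j + 1)
                   + log_beta(j + a, m - j + b) - log_beta(a, b))
        total += math.exp(log_pmf)
    return total


def pac_fcp_level(n, m, alpha_target, delta, grid=200):
    """Largest calibration level alpha' <= alpha_target such that
    P(FCP > alpha_target) <= delta over a batch of m test points,
    assuming K ~ Beta-Binomial(m, l, n+1-l) with l = floor((n+1)*alpha').
    Returns None if no candidate on the grid is feasible."""
    best = None
    for i in range(1, grid + 1):
        alpha_p = alpha_target * i / grid
        l = math.floor((n + 1) * alpha_p)
        if l < 1:
            continue  # level too small to define the Beta parameters
        # FCP = K / m, so P(FCP > alpha_target) = P(K > floor(m * alpha_target))
        tail = betabinom_tail(math.floor(m * alpha_target), m, l, n + 1 - l)
        if tail <= delta:
            best = alpha_p  # most permissive feasible level => smallest sets
    return best


if __name__ == "__main__":
    # With 1000 calibration points and a batch of 100, running split CP
    # at the nominal alpha = 0.1 gives P(FCP > 0.1) well above 0.1, so
    # the inverted level is strictly smaller than the target.
    level = pac_fcp_level(n=1000, m=100, alpha_target=0.1, delta=0.1)
    print(level)
```

Calibrating split CP at the returned level (rather than at the nominal $\alpha$) is what, per the abstract, trades a slightly more conservative per-point level for a batch-level FCP guarantee while keeping the prediction sets small.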
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: All changes since the last submission are highlighted in red in the revised manuscript (the PDF file).
Assigned Action Editor: ~Michele_Caprio1
Submission Number: 6757