Keywords: Conformal Prediction, Algorithmic Stability, Regularized Loss Minimization, Stochastic Gradient Descent
Abstract: Conformal prediction (CP) is an important tool for distribution-free predictive uncertainty quantification.
Yet, a major challenge is to balance computational efficiency and prediction accuracy, particularly when many prediction requests must be served.
We propose **L**eave-**O**ne-**O**ut **Stab**le **C**onformal **P**rediction (LOO-StabCP), a novel method to speed up full conformal prediction using algorithmic stability without sample splitting.
By leveraging *leave-one-out* stability, our method handles a large number of prediction requests much faster than the existing method RO-StabCP, which is based on *replace-one* stability.
We derive stability bounds for several popular machine learning tools, including regularized loss minimization (RLM) and stochastic gradient descent (SGD), as well as kernel methods, neural networks, and bagging.
Our method is theoretically justified and demonstrates superior numerical performance on synthetic and real-world data.
We apply our method to a screening problem, where its effective exploitation of training data leads to improved test power compared to a state-of-the-art method based on split conformal prediction.
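For context, the split conformal baseline referenced above can be written in a few lines. The following is a minimal sketch, not the submission's LOO-StabCP method; the function names, the data split, and the least-squares model are illustrative assumptions:

```python
import numpy as np

def split_conformal_interval(X_train, y_train, X_cal, y_cal, x_test, fit, alpha=0.1):
    """Split conformal prediction interval for regression.

    `fit(X, y)` must return a prediction function; both the split
    and the model below are illustrative placeholders.
    """
    model = fit(X_train, y_train)              # fit once on the proper training half
    scores = np.abs(y_cal - model(X_cal))      # absolute-residual nonconformity scores
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))    # finite-sample-corrected quantile index
    if k > n:                                  # too few calibration points: trivial interval
        return -np.inf, np.inf
    q = np.sort(scores)[k - 1]
    pred = float(model(np.atleast_2d(x_test))[0])
    return pred - q, pred + q                  # marginal coverage >= 1 - alpha

# Toy usage with an ordinary least-squares fit (illustrative).
def ls_fit(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda X_new: X_new @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200)
lo, hi = split_conformal_interval(X[:100], y[:100], X[100:], y[100:], X[0], ls_fit)
```

The sample-splitting step here is exactly what the abstract's stability-based approach avoids: full conformal methods use all labeled data for both fitting and calibration, at the cost of refitting, which stability bounds are used to sidestep.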
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4509