Truncated Conformal Prediction: A Sparsity-Aware Framework for Classification

ICLR 2026 Conference Submission14805 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Conformal prediction, Classification, Sparsity prior, Truncation
TL;DR: A truncation-normalization framework that incorporates a sparsity prior into the CP algorithm for classification tasks to improve efficiency.
Abstract: Conformal prediction (CP) is a distribution-free method for uncertainty quantification that transforms any point estimator into a set predictor. While there are many ways to enhance the efficiency of CP, an important yet underexplored one is to incorporate domain prior knowledge into the algorithm. In this paper, we focus on incorporating a sparsity prior into the CP algorithm for classification tasks. Specifically, the probability simplex often exhibits a sparsity structure in large-scale classification tasks. However, existing classifiers typically include a softmax layer that diminishes this sparsity. To address this issue, we propose a truncation-normalization operator that exploits the sparsity prior in CP, thereby improving efficiency. Both theoretical and empirical results reveal the following insights: (i) the U-shaped relation between set size and truncation level ensures the existence of a nonzero optimal truncation level; (ii) the oracle set can be recovered by choosing the optimal truncation level, which is unattainable without truncation; and (iii) the optimal truncation level correlates positively with model quality.
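To make the truncation-normalization idea concrete, here is a minimal sketch in Python. The abstract does not specify the paper's exact operator or conformity score, so this is an assumption-laden illustration: the operator zeros out softmax probabilities below a truncation level `tau` and renormalizes, and it is paired with a standard split-conformal score `s(x, y) = 1 - p_tau(y | x)`. All function names (`truncate_normalize`, `split_conformal_sets`) are hypothetical, not from the paper.

```python
import numpy as np

def truncate_normalize(probs, tau):
    """Zero out class probabilities below tau, then renormalize each row.

    Hypothetical sketch of the truncation-normalization operator:
    entries below tau are treated as noise under the sparsity prior.
    """
    trunc = np.where(probs >= tau, probs, 0.0)
    row_sums = trunc.sum(axis=1, keepdims=True)
    # If a row is fully truncated, fall back to the original probabilities.
    safe = np.where(row_sums > 0, trunc, probs)
    return safe / safe.sum(axis=1, keepdims=True)

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1, tau=0.05):
    """Split conformal prediction with score s(x, y) = 1 - p_tau(y | x)."""
    cal_p = truncate_normalize(cal_probs, tau)
    test_p = truncate_normalize(test_probs, tau)
    n = len(cal_labels)
    scores = 1.0 - cal_p[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of calibration scores.
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    # A class enters the prediction set iff its score is below the threshold.
    return [np.where(1.0 - row <= q)[0] for row in test_p]
```

Varying `tau` in this sketch is how one would probe the U-shaped set-size curve the abstract describes: `tau = 0` recovers plain split CP, while too large a `tau` destroys probability mass and inflates the sets again.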
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 14805