Improving the efficiency of conformal predictors via test-time augmentation

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: uncertainty estimation, conformal prediction, data augmentation, image classification, test-time augmentation
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Combining learned test-time augmentation policies with conformal predictors produces prediction sets that are up to 30% smaller, with no loss of coverage.
Abstract: In conformal classification, the goal is to output a _set_ of predicted classes, accompanied by a probabilistic guarantee that the set includes the true class. Conformal approaches have gained widespread traction across domains because they can be composed with existing classifiers to generate predictions with probabilistically valid uncertainty estimates. In practice, however, the utility of conformal prediction is limited by its tendency to yield large prediction sets. We study this phenomenon and provide insights into why large set sizes persist, even for conformal methods designed to produce small sets. Using these insights, we propose a method to reduce prediction set size while maintaining coverage. We use test-time augmentation to replace a classifier's predicted probabilities with probabilities aggregated over a set of augmentations. Our approach is flexible, computationally efficient, and effective. It can be combined with any conformal score, requires no model retraining, and reduces prediction set sizes by up to 30%. We evaluate the approach across three datasets, three models, two established conformal scoring methods, and multiple coverage values to show when and why test-time augmentation is a useful addition to the conformal pipeline.
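For illustration only, here is a minimal sketch of the pipeline the abstract describes: average a classifier's softmax outputs over test-time augmentations, then form split-conformal prediction sets from the aggregated probabilities. The `model.predict_proba` interface, the `augmentations` list, and the simple 1 − p(true class) conformal score are assumptions made for this sketch, not the paper's exact method or code.

```python
# Sketch (not the authors' implementation): test-time augmentation + split conformal prediction.
import numpy as np

def tta_probs(model, x, augmentations):
    """Average the model's softmax outputs over a set of augmentations of x.

    `model.predict_proba` and `augmentations` are hypothetical placeholders.
    """
    probs = [model.predict_proba(aug(x)) for aug in augmentations]
    return np.mean(probs, axis=0)

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration using the score 1 - p(true class).

    cal_probs: (n, K) array of (TTA-aggregated) class probabilities.
    cal_labels: (n,) array of integer labels for the calibration set.
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Conformal quantile with the finite-sample correction (n + 1).
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, q_level, method="higher")

def prediction_set(test_probs, qhat):
    """Return all classes whose score 1 - p(class) is below the calibrated threshold."""
    return np.where(1.0 - test_probs <= qhat)[0]
```

Any other conformal score (e.g., APS/RAPS-style scores) could be substituted in place of 1 − p(true class); the test-time-augmentation step only changes the probabilities fed into the scoring function, which is why no retraining is required.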
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6838