Approximate Conditional Coverage via Neural Model Approximations

Published: 01 Feb 2023, Last Modified: 25 Nov 2024. Submitted to ICLR 2023.
Keywords: distribution-free uncertainty quantification, split-conformal prediction sets, Venn Predictors
TL;DR: We construct prediction sets over Transformer networks, via KNN-based approximations, obtaining reliable assumption- and parameter-light approximate conditional coverage.
Abstract: We propose a new approach for constructing prediction sets over Transformer networks, exploiting the strong signals for prediction reliability provided by KNN-based approximations of the network. This enables a data-driven partitioning of the high-dimensional feature space and a new Inductive Venn Predictor for calibration, the Venn-ADMIT Predictor. Our approach achieves approximate conditional coverage more closely than recent work proposing adaptive and localized conformal score functions for deep networks. We analyze coverage on several representative natural language processing classification tasks, including class-imbalanced and distribution-shifted settings.
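The abstract describes an Inductive Venn Predictor whose categories come from a KNN-based approximation over the network's feature space. The Venn-ADMIT Predictor itself is not specified here, so the following is only a minimal sketch of the general recipe under simplifying assumptions: the taxonomy assigns each point the majority label among its k nearest calibration neighbors in (e.g. Transformer) feature space, and the function names (`knn_majority`, `venn_intervals`) are illustrative, not from the paper.

```python
import numpy as np

def knn_majority(feats, labels, x, k, n_classes):
    """Category of x: majority label among its k nearest neighbours
    (Euclidean distance in the network's feature space)."""
    d = np.linalg.norm(feats - x, axis=1)
    nn = np.argsort(d)[:k]
    return np.bincount(labels[nn], minlength=n_classes).argmax()

def venn_intervals(cal_feats, cal_labels, x, n_classes, k=3):
    """Inductive Venn prediction with a KNN-based taxonomy (sketch).

    For each hypothesized test label y, place (x, y) in the calibration
    set and read off the empirical label distribution within x's
    category.  The spread across hypotheses yields lower/upper
    probability bounds per class.
    """
    # Since this taxonomy depends only on features, the category of each
    # point is unaffected by the hypothesized label and can be computed once.
    cat_of_x = knn_majority(cal_feats, cal_labels, x, k, n_classes)
    cats = np.array([
        knn_majority(cal_feats, cal_labels, f, k, n_classes)
        for f in cal_feats
    ])
    in_cat = cats == cat_of_x

    rows = np.zeros((n_classes, n_classes))
    for y in range(n_classes):
        counts = np.bincount(cal_labels[in_cat],
                             minlength=n_classes).astype(float)
        counts[y] += 1.0  # (x, y) itself also falls in x's category
        rows[y] = counts / counts.sum()
    # Per-class lower and upper probabilities across hypotheses.
    return rows.min(axis=0), rows.max(axis=0)
```

With well-separated calibration clusters, the interval for the test point's true class is narrow and near 1, while poorly populated or mixed categories produce wide intervals, which is the sense in which the category frequencies act as a reliability signal.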
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Community Implementations: 1 code implementation (CatalyzeX): https://www.catalyzex.com/paper/approximate-conditional-coverage-via-neural/code