Early-Exit Neural Networks with Nested Prediction Sets

Published: 26 Apr 2024 · Last Modified: 15 Jul 2024 · UAI 2024 poster · CC BY 4.0
Keywords: early-exit neural networks, uncertainty quantification, anytime-valid confidence sequence
TL;DR: We use anytime-valid confidence sequences to ensure that uncertainty estimates are consistent across exits in early-exit neural networks.
Abstract: Early-exit neural networks (EENNs) facilitate adaptive inference by producing predictions at multiple stages of the forward pass. In safety-critical applications, these predictions are only meaningful when complemented with reliable uncertainty estimates. Yet, due to their sequential structure, an EENN's uncertainty estimates should also be *consistent*: labels that are deemed improbable at one exit should not reappear within the confidence interval / set of later exits. We show that standard uncertainty quantification techniques, like Bayesian methods or conformal prediction, can lead to inconsistency across exits. We address this problem by applying anytime-valid confidence sequences (AVCSs) to the exits of EENNs. By design, AVCSs maintain consistency across exits. We examine the theoretical and practical challenges of applying AVCSs to EENNs and empirically validate our approach on both regression and classification tasks.
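The consistency requirement described in the abstract — that a label excluded at one exit must never reappear in a later exit's set — is, by construction, the nestedness property of anytime-valid confidence sequences. The sketch below is *not* the paper's AVCS construction; it is a minimal illustration of how nestedness can be enforced for classification by taking a running intersection of per-exit prediction sets (function and variable names are hypothetical):

```python
def consistent_prediction_sets(per_exit_sets):
    """Enforce nestedness across exits by running intersection.

    per_exit_sets: list of label sets, one per exit, in forward-pass order.
    Returns a list of sets where each is a subset of its predecessor,
    so a label dropped at one exit can never reappear later.
    """
    nested = []
    current = set(per_exit_sets[0])
    for exit_set in per_exit_sets:
        current &= set(exit_set)  # intersect with the current exit's set
        nested.append(set(current))
    return nested


# Example: exit 2's raw set re-admits label 0, which exit 1 had excluded;
# the intersection removes this inconsistency.
raw = [{0, 1, 2}, {1, 2}, {0, 1}]
print(consistent_prediction_sets(raw))  # [{0, 1, 2}, {1, 2}, {1}]
```

Note that intersecting sets that were each calibrated marginally (e.g., via per-exit conformal prediction) does not, on its own, preserve the coverage guarantee — this is exactly the gap the paper's AVCS approach addresses.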
List Of Authors: Jazbec, Metod and Forre, Patrick and Mandt, Stephan and Zhang, Dan and Nalisnick, Eric
Code Url: https://github.com/metodj/EENN-AVCS
Submission Number: 171