When Does Confidence-Based Cascade Deferral Suffice?

Published: 21 Sept 2023 · Last Modified: 02 Jan 2024 · NeurIPS 2023 poster
Keywords: cascades, deferral rules, adaptive computation, model confidence
TL;DR: We identify conditions under which confidence-based cascades may succeed or fail, present a theoretically optimal deferral rule, and study post-hoc deferral schemes that improve upon confidence-based deferral.
Abstract: Cascades are a classical strategy for allowing inference cost to vary adaptively across samples, wherein a sequence of classifiers is invoked in turn. A deferral rule determines whether to invoke the next classifier in the sequence or to terminate prediction. One simple deferral rule employs the confidence of the current classifier, e.g., based on the maximum predicted softmax probability. Despite being oblivious to the structure of the cascade (e.g., not modelling the errors of downstream models), such confidence-based deferral often works remarkably well in practice. In this paper, we seek to better understand the conditions under which confidence-based deferral may fail, and when alternate deferral strategies can perform better. We first present a theoretical characterisation of the optimal deferral rule, which precisely identifies settings under which confidence-based deferral may suffer. We then study post-hoc deferral mechanisms, and demonstrate that they can significantly improve upon confidence-based deferral in settings where (i) downstream models are specialists that only work well on a subset of inputs, (ii) samples are subject to label noise, and (iii) there is distribution shift between the train and test sets.
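For concreteness, below is a minimal sketch (ours, not the authors' code) of the confidence-based deferral rule described in the abstract: invoke models in sequence, and stop once the current model's maximum softmax probability clears a threshold. The names `cascade_predict`, `threshold`, and the toy classifiers are hypothetical stand-ins, assuming each model maps an input to class logits.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cascade_predict(x, models, threshold=0.9):
    """Confidence-based cascade deferral: invoke each model in turn, and
    terminate as soon as its maximum softmax probability reaches the
    threshold (or the final model in the cascade is reached)."""
    for i, model in enumerate(models):
        probs = softmax(model(x))
        # Defer to the next model only if confidence is below the threshold.
        if probs.max() >= threshold or i == len(models) - 1:
            return int(probs.argmax()), i  # prediction, index of model used

# Toy usage: two hypothetical "models" mapping an input to class logits.
cheap = lambda x: np.array([4.0, 0.1, 0.1])   # high-confidence logits
costly = lambda x: np.array([0.3, 0.2, 0.1])  # fallback model
pred, used = cascade_predict(None, [cheap, costly], threshold=0.8)
print(pred, used)  # -> 0 0 : the cheap model is confident, so no deferral
```

Note that this rule never consults the downstream models when deciding whether to defer, which is exactly the obliviousness the paper analyses.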
Supplementary Material: pdf
Submission Number: 4764