Post-hoc estimators for learning to defer to an expert

Published: 31 Oct 2022, 18:00 · Last Modified: 12 Jan 2023, 21:15 · NeurIPS 2022 Accept
Keywords: learning to defer, adaptive inference
TL;DR: Existing losses for learning to defer to an expert may result in underfitting, which we resolve by a post-hoc scheme.
Abstract: Many practical settings allow a learner to defer predictions to one or more costly experts. For example, the learning to defer paradigm allows a learner to defer to a human expert, at some monetary cost. Similarly, the adaptive inference paradigm allows a base model to defer to one or more large models, at some computational cost. The goal in these settings is to learn classification and deferral mechanisms to optimise a suitable accuracy-cost tradeoff. To achieve this, a central issue studied in prior work is the design of a coherent loss function for both mechanisms. In this work, we demonstrate that existing losses have two subtle limitations: they can encourage underfitting when there is a high cost of deferring, and the deferral function can have a weak dependence on the base model predictions. To resolve these issues, we propose a post-hoc training scheme: we train a deferral function on top of a base model, with the objective of predicting to defer when the base model's error probability exceeds the cost of the expert model. This may be viewed as applying a partial surrogate to the ideal deferral loss, which can lead to a tighter approximation and thus better performance. Empirically, we verify the efficacy of post-hoc training on benchmarks for learning to defer and adaptive inference.
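The proposed scheme reduces to a simple decision rule: defer an example to the expert whenever the base model's estimated error probability exceeds the deferral cost. A minimal sketch of that rule is below; note the abstract trains a dedicated deferral function on top of the base model, whereas here we use `1 - max softmax` as a stand-in error-probability estimate, and the function name and signature are illustrative, not from the paper.

```python
import numpy as np

def posthoc_defer(base_probs, deferral_cost):
    """Illustrative post-hoc deferral rule.

    base_probs:    (n, k) array of base-model class probabilities.
    deferral_cost: scalar cost c of querying the expert.

    Defers on examples where the estimated error probability
    1 - max_y p(y | x) exceeds c. (The paper instead learns a
    deferral function trained to predict this event.)
    """
    err_prob = 1.0 - base_probs.max(axis=1)
    return err_prob > deferral_cost

# Toy example: confident on the first input, uncertain on the second.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45]])
print(posthoc_defer(probs, deferral_cost=0.2))  # -> [False  True]
```

With a cost of 0.2, only the second example's error estimate (0.45) exceeds the cost, so only it is deferred; raising the cost makes deferral rarer, matching the accuracy-cost tradeoff the abstract describes.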