Keywords: Learning to Defer; Conformal Prediction; Uncertainty Quantification; Bayesian Probability
TL;DR: We perform uncertainty quantification for the rejector sub-component of the Learning-to-Defer framework, finding improvements over the traditional method.
Abstract: Learning to defer (L2D) allows prediction tasks to be allocated to a human or machine decision maker, thereby leveraging the strengths of both. Yet this allocation decision depends on a ‘rejector’ function, which could be poorly fit or otherwise misspecified. In this work, we perform uncertainty quantification for the rejector sub-component of the L2D framework. We use conformal prediction to allow the rejector to output sets instead of just the binary outcome of ‘defer’ or not. On tasks ranging from object detection to hate speech detection, we demonstrate that the uncertainty in the rejector translates to safer decisions via two forms of selective prediction.
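The abstract does not spell out the conformal construction used for the rejector. Purely as illustration, a standard split-conformal prediction set over the binary defer decision might look like the sketch below; the function name, nonconformity score, and data layout are assumptions for exposition, not the authors' method.

```python
import numpy as np

def conformal_rejector_set(cal_probs, cal_labels, test_prob, alpha=0.1):
    """Split-conformal prediction set over {0: predict, 1: defer} (illustrative sketch).

    cal_probs:  (n,) calibration-set probabilities the rejector assigns to 'defer'
    cal_labels: (n,) binary ground-truth defer indicators for the calibration set
    test_prob:  rejector's 'defer' probability for the test point
    alpha:      miscoverage level; the set covers the true decision w.p. >= 1 - alpha
    """
    # Nonconformity: one minus the probability assigned to the true label.
    cal_scores = np.where(cal_labels == 1, 1 - cal_probs, cal_probs)
    n = len(cal_scores)
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(cal_scores, level, method="higher")
    # Include every label whose nonconformity score falls below the threshold.
    return {y for y, s in [(1, 1 - test_prob), (0, test_prob)] if s <= q}
```

Under this kind of construction, a two-element set {predict, defer} flags points where the rejector itself is uncertain, which is what enables the selective-prediction behavior the abstract describes.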
Submission Number: 122