TL;DR: We propose an alternative to conformal prediction based on Bayesian quadrature that produces a distribution over test-time risk.
Abstract: As machine learning-based prediction systems are increasingly used in high-stakes situations, it is important to understand how such predictive models will perform upon deployment. Distribution-free uncertainty quantification techniques such as conformal prediction provide guarantees about the loss black-box models will incur even when the details of the models are hidden. However, such methods are based on frequentist probability, which unduly limits their applicability. We revisit the central aspects of conformal prediction from a Bayesian perspective and thereby illuminate the shortcomings of frequentist guarantees. We propose a practical alternative based on Bayesian quadrature that provides interpretable guarantees and offers a richer representation of the likely range of losses to be observed at test time.
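A minimal illustrative sketch of the core idea, a posterior distribution over test-time risk computed from held-out calibration losses. The function name `risk_posterior_samples` and the Dirichlet-weighted (Bayesian-bootstrap-style) resampling used here are assumptions made for illustration only, not the paper's exact quadrature rule:

```python
import numpy as np

def risk_posterior_samples(calibration_losses, num_samples=10000, seed=0):
    """Draw samples from an approximate posterior over expected test-time loss.

    Each sample reweights the observed calibration losses with Dirichlet(1, ..., 1)
    weights, giving one plausible value of the deployment-time risk.
    """
    rng = np.random.default_rng(seed)
    losses = np.asarray(calibration_losses, dtype=float)
    # Dirichlet weights over the n calibration points (uniform concentration).
    weights = rng.dirichlet(np.ones(len(losses)), size=num_samples)
    return weights @ losses  # shape: (num_samples,)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cal_losses = rng.binomial(1, 0.1, size=200)  # e.g. 0/1 miscoverage losses
    samples = risk_posterior_samples(cal_losses)
    lo, hi = np.quantile(samples, [0.05, 0.95])
    print(f"posterior mean risk: {samples.mean():.3f}, "
          f"90% credible interval: [{lo:.3f}, {hi:.3f}]")
```

Unlike a single frequentist guarantee, the resulting samples summarize a whole range of plausible deployment-time losses, from which credible intervals or tail probabilities can be read off directly.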
Lay Summary: Machine learning can be used to make predictions in high-stakes settings. In these settings, we want to decide whether we should use a particular prediction algorithm. We can first measure the performance of the algorithm on some data, and then use this measurement to estimate whether the algorithm is appropriate. Previous methods for making this estimate have two main issues: some require strong assumptions about the algorithm itself, while others produce only a single estimate rather than a range of possibilities. We show how to produce a range of plausible estimates without strong assumptions. In this way, we can better decide whether we can safely use the algorithm.
Link To Code: https://github.com/jakesnell/conformal-as-bayes-quad
Primary Area: Probabilistic Methods->Bayesian Models and Methods
Keywords: bayesian quadrature, probabilistic numerics, conformal prediction, distribution-free uncertainty quantification
Submission Number: 11432