JAWS: Auditing Predictive Uncertainty Under Covariate Shift

Published: 31 Oct 2022, Last Modified: 16 Jan 2023, NeurIPS 2022 Accept, Readers: Everyone
Keywords: conformal prediction, uncertainty quantification, covariate shift, jackknife+, influence functions, error assessment, auditing
TL;DR: We propose JAWS, a series of wrapper methods for distribution-free uncertainty quantification under covariate shift, including the jackknife+ with likelihood-ratio weights, a computationally efficient approximation, and extensions to error assessment.
Abstract: We propose JAWS, a series of wrapper methods for distribution-free uncertainty quantification tasks under covariate shift, centered on the core method JAW, the JAckknife+ Weighted with data-dependent likelihood-ratio weights. JAWS also includes computationally efficient Approximations of JAW using higher-order influence functions: JAWA. Theoretically, we show that JAW relaxes the jackknife+'s assumption of data exchangeability, achieving the same finite-sample coverage guarantee even under covariate shift. Under common regularity assumptions, JAWA further approaches the JAW guarantee in the limit of either the sample size or the influence-function order. Moreover, we propose a general approach to repurposing predictive interval-generating methods and their guarantees for the reverse task: estimating the probability that a prediction is erroneous, based on user-specified error criteria such as a safe or acceptable tolerance threshold around the true label. We then propose JAW-E and JAWA-E as the repurposed versions of our methods for this Error assessment task. Empirically, the JAWS methods outperform state-of-the-art predictive inference baselines on a variety of biased real-world datasets, for both interval-generation and error-assessment predictive uncertainty auditing tasks.
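The core construction the abstract describes, a jackknife+ whose leave-one-out residual distribution is reweighted by normalized covariate likelihood ratios, can be illustrated with a short sketch. This is a minimal, hypothetical rendering under stated assumptions, not the authors' released code: `jaw_interval`, `weighted_quantile`, and the oracle `lik_ratio` are illustrative names, and in practice the likelihood ratio dP_test/dP_train would itself be estimated from data.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import Ridge

def weighted_quantile(values, probs, q):
    """q-quantile of a discrete distribution placing mass probs[i] at values[i]."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    p = np.asarray(probs, dtype=float)[order]
    idx = np.searchsorted(np.cumsum(p), q, side="left")
    return v[min(idx, len(v) - 1)]

def jaw_interval(X, y, x_test, lik_ratio, alpha=0.1, base_model=None):
    """Sketch of a JAW-style weighted jackknife+ interval at one test point.

    lik_ratio(x) is assumed to return the covariate likelihood ratio
    dP_test(x) / dP_train(x) (an oracle here, estimated in practice).
    """
    n = len(y)
    base_model = base_model if base_model is not None else Ridge()
    lo, hi = np.empty(n), np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        m = clone(base_model).fit(X[mask], y[mask])     # leave-one-out fit
        resid = abs(y[i] - m.predict(X[i:i + 1])[0])    # LOO residual
        pred = m.predict(x_test.reshape(1, -1))[0]
        lo[i], hi[i] = pred - resid, pred + resid
    # Normalized likelihood-ratio weights; the test point carries its own weight,
    # which attaches to -inf / +inf as in weighted conformal constructions.
    w = np.array([lik_ratio(xi) for xi in X] + [lik_ratio(x_test)])
    p = w / w.sum()
    lower = weighted_quantile(np.append(lo, -np.inf), p, alpha)
    upper = weighted_quantile(np.append(hi, np.inf), p, 1 - alpha)
    return lower, upper

# Illustrative usage with a made-up density-ratio oracle:
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1))
y = X[:, 0] + rng.normal(scale=0.3, size=50)
ratio = lambda x: np.exp(0.5 * np.atleast_1d(x)[0])   # hypothetical shift
print(jaw_interval(X, y, np.array([1.0]), ratio, alpha=0.1))
```

Attaching the test point's normalized weight to the infinite endpoints mirrors the weighted exchangeability argument that underlies the finite-sample coverage guarantee under covariate shift; the unweighted jackknife+ is recovered when the likelihood ratio is constant.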
Supplementary Material: pdf