Keywords: calibration, decision-making
Abstract: In many applications, decision-makers must choose between multiple predictive
models that may all be miscalibrated. Which model (i.e., predictor) is more
“useful” in downstream decision tasks? To answer this, our first contribution
introduces the notion of the informativeness gap between any two predictors,
defined as the maximum normalized payoff advantage one predictor offers over the
other across all decision-making tasks. Our framework strictly generalizes several
existing notions: it subsumes U-Calibration (Kleinberg et al., 2023) and Calibration
Decision Loss (Hu and Wu, 2024), which compare a miscalibrated predictor to its
calibrated counterpart, and it recovers Blackwell informativeness (Blackwell, 1951,
1953) as a special case when both predictors are perfectly calibrated. Our second
contribution is a dual characterization of the informativeness gap, which gives rise
to a natural informativeness measure that can be viewed as a relaxed variant of the
earth mover’s distance (EMD) between two prediction distributions. We show that
this measure satisfies natural desiderata: it is complete and sound, and it can be
estimated sample-efficiently in the prediction-only access setting. Along the way,
we also obtain novel combinatorial structural results when applying this measure
to perfectly calibrated predictors.
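As a point of reference for the measure described above, the following sketch (not from the paper) computes the standard 1-D earth mover's distance between two discrete prediction distributions on [0, 1], via the integral of the absolute CDF difference; the paper's informativeness measure is described as a relaxed variant of this quantity. The support points and distributions are hypothetical.

```python
def emd_1d(support, p, q):
    """W1 (earth mover's) distance between distributions p and q
    on a shared, sorted support of prediction values."""
    assert len(support) == len(p) == len(q)
    emd, cdf_gap = 0.0, 0.0
    for i in range(len(support) - 1):
        cdf_gap += p[i] - q[i]                       # running CDF difference
        emd += abs(cdf_gap) * (support[i + 1] - support[i])
    return emd

# Hypothetical prediction distributions of two predictors over {0.0, 0.5, 1.0}:
support = [0.0, 0.5, 1.0]
p = [0.2, 0.6, 0.2]   # predictor A (illustrative)
q = [0.5, 0.0, 0.5]   # predictor B (illustrative)
print(emd_1d(support, p, q))  # 0.3
```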
Submission Number: 206