Keywords: Calibration, Uncertainty Quantification, Decision Making
Abstract: When model predictions inform downstream decisions, a natural question is under what conditions decision-makers can simply respond to the predictions as if they were the true outcomes. The recently proposed notion of decision calibration addresses this by requiring predictions to be unbiased conditional on the best-response actions the predictions induce. This relaxation of classical calibration avoids sample complexity that is exponential in the dimension of the outcome space.
However, existing guarantees are limited to linear losses. A natural strategy for nonlinear losses is to embed outcomes $y$ into an $m$-dimensional feature space $\phi(y)$ and approximate losses linearly in $\phi(y)$. Yet even simple nonlinear functions can demand exponentially large or infinite feature dimensions, raising the open question of whether decision calibration can be achieved with complexity independent of the feature dimension $m$. We begin with a negative result: even verifying decision calibration under standard deterministic best response inherently requires sample complexity polynomial in $m$.
To overcome this barrier, we study a smooth variant in which agents follow quantal responses. This smooth relaxation admits dimension-free algorithms: given $\mathrm{poly}(|\mathcal{A}|,1/\epsilon)$ samples and any initial predictor $p$, our algorithm efficiently tests and achieves decision calibration for broad function classes that can be well approximated by bounded-norm functions in a (possibly infinite-dimensional) separable RKHS, including piecewise-linear and Cobb–Douglas loss functions.
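To make the contrast in the abstract concrete, here is a minimal sketch of the two response models: a deterministic best response (argmin over expected losses) versus a quantal response, which softens the argmin into a softmax-style distribution over actions. The function names, the temperature parameter `beta`, and the example loss vector are illustrative choices, not taken from the paper.

```python
import numpy as np

def best_response(losses):
    """Deterministic best response: the action minimizing expected loss."""
    return int(np.argmin(losses))

def quantal_response(losses, beta=5.0):
    """Quantal response: play each action with probability proportional
    to exp(-beta * loss); a smooth relaxation of the argmin above."""
    logits = -beta * np.asarray(losses, dtype=float)
    logits -= logits.max()          # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Expected loss of each action under some prediction p (illustrative numbers).
expected_losses = [0.9, 0.3, 0.35]
print(best_response(expected_losses))     # -> 1 (the argmin)
print(quantal_response(expected_losses))  # a full distribution over actions
```

Because the quantal response varies smoothly with the predicted losses, small perturbations of the prediction change the induced action distribution only slightly, which is the property the smooth variant of decision calibration exploits.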
Primary Area: learning theory
Submission Number: 12970