Swap Agnostic Learning, or Characterizing Omniprediction via Multicalibration

Published: 21 Sept 2023, Last Modified: 14 Nov 2023, NeurIPS 2023 poster
Keywords: Agnostic Learning, Omniprediction, Multicalibration
TL;DR: Introduces swap agnostic learning and shows its feasibility via a surprising equivalence to swap variants of omniprediction and multicalibration.
Abstract: We introduce and study the notion of Swap Agnostic Learning. The problem can be phrased as a game between a *predictor* and an *adversary*: first, the predictor selects a hypothesis $h$; then, the adversary plays in response, and for each level set of the predictor, selects a loss-minimizing hypothesis $c_v \in \mathcal{C}$; the predictor wins if $h$ competes with the adaptive adversary's loss. Despite the strength of the adversary, our main result demonstrates the feasibility of Swap Agnostic Learning for any convex loss. Somewhat surprisingly, the result follows by proving an *equivalence* between Swap Agnostic Learning and swap variants of the recent notions of Omniprediction (ITCS'22) and Multicalibration (ICML'18). Beyond this equivalence, we establish further connections to the literature on Outcome Indistinguishability (STOC'20, ITCS'23), revealing a unified notion of OI that captures all existing notions of omniprediction and multicalibration.
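To make the game concrete, here is a minimal illustrative sketch (not the authors' code) of how the swap adversary's benchmark could be evaluated on a finite sample: the predictor's loss is compared against an adversary that, on each level set of $h$, picks the loss-minimizing hypothesis $c_v$ from the class $\mathcal{C}$. The function name `swap_agnostic_gap` and all inputs are hypothetical.

```python
import numpy as np

def swap_agnostic_gap(h, C, X, y, loss):
    """Illustrative sketch of the Swap Agnostic Learning benchmark.

    h    : callable mapping X to predictions taking finitely many values
    C    : list of hypotheses (callables mapping X to predictions)
    loss : vectorized loss(pred, y) -> array of per-example losses

    Returns (predictor's loss) - (swap adversary's loss); the predictor
    "wins" when this gap is at most a small epsilon.
    """
    preds = h(X)
    n = len(y)
    predictor_loss = loss(preds, y).sum() / n

    adversary_loss = 0.0
    for v in np.unique(preds):
        m = preds == v  # level set {x : h(x) = v}
        # On each level set, the adversary selects the
        # loss-minimizing hypothesis c_v from the class C.
        adversary_loss += min(loss(c(X[m]), y[m]).sum() for c in C) / n

    return predictor_loss - adversary_loss
```

Note that the adversary is adaptive: it may choose a *different* hypothesis on each level set, which is what makes competing with it stronger than ordinary agnostic learning against a single best $c \in \mathcal{C}$.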
Supplementary Material: pdf
Submission Number: 2624