Panprediction: Optimal Predictions for Any Downstream Task and Loss

Published: 03 Feb 2026 · Last Modified: 03 Feb 2026 · AISTATS 2026 Oral · License: CC BY 4.0
TL;DR: Simultaneously minimizing infinitely many losses on infinitely many tasks can be as statistically easy as minimizing one loss on one task.
Abstract: Machine learning is classically formulated as optimizing a model for a known task and loss function. However, an emerging paradigm instead views model training as extracting enough information from a dataset so that the model—after light post-processing—can minimize any loss on any downstream task. This paper formalizes a mathematical framework for this paradigm, which we call panprediction. Panprediction generalizes the problem of omniprediction (Gopalan et al., 2021) and sits upstream from the problem of multi-group learning (Rothblum and Yona, 2021), which respectively focus on predictions that generalize to many downstream losses or many downstream tasks, but not both. We show that panprediction admits a nearly lossless reduction to a statistically efficient notion of calibration, called step calibration. Using this reduction, we design deterministic and randomized panpredictors that can be learned with $\tilde{O}(1/\varepsilon^3)$ and $\tilde{O}(1/\varepsilon^2)$ samples, respectively. This improves the best known sample complexity guarantee of deterministic omniprediction, and matches the best known sample complexity guarantees of randomized omniprediction and both deterministic and randomized multi-group learning. Our results demonstrate that simultaneously minimizing infinitely many losses on infinitely many tasks can be as statistically easy as minimizing one loss on one task.
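To make the guarantee concrete, the following is an illustrative formalization of the panprediction objective, in our own notation rather than the paper's exact definition: write $\mathcal{L}$ for a class of loss functions, $\mathcal{G}$ for a collection of downstream tasks (e.g., subpopulations of the data), and $\mathcal{H}$ for a benchmark hypothesis class. A panpredictor is then, roughly, a predictor $\tilde{p}$ such that for every loss a simple loss-specific post-processing $k_\ell$ of $\tilde{p}$ competes with the best hypothesis in $\mathcal{H}$, simultaneously on every task:
\[
\forall\, \ell \in \mathcal{L},\ \forall\, g \in \mathcal{G}:\qquad
\mathbb{E}\bigl[\ell\bigl(k_\ell(\tilde{p}(x)),\, y\bigr) \,\big|\, x \in g\bigr]
\;\le\; \min_{h \in \mathcal{H}} \mathbb{E}\bigl[\ell\bigl(h(x),\, y\bigr) \,\big|\, x \in g\bigr] \;+\; \varepsilon.
\]
Under this reading, fixing $\mathcal{G}$ to the single task "all data" recovers omniprediction, while fixing $\mathcal{L}$ to a single loss recovers multi-group learning, and the $\tilde{O}(1/\varepsilon^3)$ and $\tilde{O}(1/\varepsilon^2)$ bounds refer to the number of samples needed to learn such a $\tilde{p}$ deterministically or with randomization, respectively.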
Submission Number: 1888