Aggregating Algorithm and Axiomatic Loss Aggregation

TMLR Paper3513 Authors

18 Oct 2024 (modified: 22 Jul 2025) · Decision pending for TMLR · CC BY 4.0
Abstract: Supervised learning has gone beyond the empirical risk minimization framework. Central to most of these developments is the introduction of more general aggregation functions for the losses incurred by the learner. In this paper, we turn to online learning under expert advice. Via easily justified assumptions, we characterize a set of reasonable loss aggregation functions as quasi-sums. Based upon this insight, we suggest how to tailor Vovk's Aggregating Algorithm to these more general aggregation functions. The "change of variables" we propose lets us highlight that "weighting profiles" determine the contribution of each expert to the next prediction according to their loss, and that the multiplicative structure of the weight updates in the Aggregating Algorithm translates into the additive structure of the loss aggregation in the regret bound. In addition, we suggest that the mixability of the loss function, which is functionally necessary for the Aggregating Algorithm, is intrinsically relative to the log loss, because the standard aggregation of losses in online learning is the sum. Finally, we argue, both conceptually and empirically, that our generalized loss aggregation functions express the attitude of the learner towards losses.
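To fix intuitions only: below is a minimal, hypothetical sketch of the two ingredients the abstract names, namely the exponential weighting profile of the standard Aggregating Algorithm and a quasi-sum aggregation of losses. It is not the paper's construction; the functions `quasi_sum` and `aa_weights`, the choice of generator `phi`, and the toy experts are all assumptions made for illustration.

```python
import numpy as np

def quasi_sum(losses, phi, phi_inv):
    """Aggregate a loss sequence as phi^{-1}(sum_t phi(loss_t)).
    With phi = identity this reduces to the usual sum of losses."""
    return phi_inv(sum(phi(l) for l in losses))

def aa_weights(cumulative_losses, eta=1.0):
    """Exponential weighting profile: w_i proportional to exp(-eta * L_i),
    as in the standard Aggregating Algorithm with learning rate eta."""
    w = np.exp(-eta * np.asarray(cumulative_losses))
    return w / w.sum()

# Toy example (an assumption for this sketch): two constant experts,
# square loss against binary outcomes.
outcomes = [0.0, 1.0, 1.0]
experts = [0.2, 0.8]
losses = [[(p - y) ** 2 for y in outcomes] for p in experts]

# Standard additive aggregation: phi is the identity.
L_sum = [quasi_sum(ls, phi=lambda x: x, phi_inv=lambda x: x) for ls in losses]
print(aa_weights(L_sum))

# A non-additive quasi-sum, e.g. phi = exp, which penalizes large losses
# more heavily and so expresses a more loss-averse attitude of the learner.
L_exp = [quasi_sum(ls, phi=np.exp, phi_inv=np.log) for ls in losses]
print(aa_weights(L_exp))
```

Under this toy setup, swapping the generator `phi` changes the aggregated loss each expert carries into the weighting profile, which is one way to read the abstract's claim that the aggregation function encodes the learner's attitude towards losses.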
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: The changes to the last submission are highlighted in the supplementary file "Aggregating Algorithm and Axiomatic Loss Aggregation_EDITS.pdf". We marked only substantial changes and did not individually track the typos we corrected.
Assigned Action Editor: ~Benjamin_Guedj1
Submission Number: 3513