Keywords: multi-objective learning, online learning, multiaccuracy, multicalibration, adaptive regret
Abstract: We consider the general problem of learning a predictor that satisfies multiple objectives of interest simultaneously. We work in an online setting where the data distribution can change arbitrarily over time; here, multi-objective learning captures many common targets such as online calibration, regret, and multiaccuracy. In this setting, common approaches that minimize the set of objectives over the entire time horizon can fail to adapt to distribution shifts. Previous work has tried to alleviate this problem by incorporating additional objectives that target local guarantees over contiguous subintervals, but empirical evaluations of this proposal in practice are sparse. In this article, we propose an alternative procedure that achieves local adaptivity by replacing one component of the multi-objective learning method with an adaptive online algorithm. Empirical evaluations on datasets from energy forecasting and algorithmic fairness show that our proposed method improves upon existing proposals and achieves unbiased predictions over subgroups, while remaining robust under distribution shift.
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 22679