Keywords: multi-objective learning, online learning, multiaccuracy, multicalibration, adaptive regret
Abstract: We consider the general problem of learning a predictor that simultaneously satisfies multiple objectives of interest. We work in an online setting where the data distribution can change arbitrarily over time. Here, multi-objective learning captures many common targets, such as online calibration, regret, and multiaccuracy. Existing online approaches that minimize these objectives over the entire time horizon can fail to adapt to distribution shift. We address this and propose algorithms that guarantee small error for every objective over any local time interval of a given width. Empirical evaluations on datasets from energy forecasting and algorithmic fairness show that our methods keep predictions unbiased over subgroups of concern and remain robust under distribution shift.
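To make the abstract's local-interval guarantee concrete, here is a minimal sketch (not the submission's algorithm) that measures a multiaccuracy-style error of an online predictor on consecutive windows of a fixed width. The window size W, the toy running-mean predictor, the synthetic data, and the single binary subgroup are all illustrative assumptions.

```python
# Sketch only: windowed multiaccuracy-style error, assuming synthetic data,
# a toy running-mean predictor, and one binary subgroup of concern.
import numpy as np

rng = np.random.default_rng(0)
T, W = 2000, 200                      # horizon and interval width (assumed)
x = rng.normal(size=(T, 3))           # features
group = (x[:, 0] > 0).astype(int)     # a binary subgroup (assumed)
y = (x @ np.array([1.0, -0.5, 0.2])
     + rng.normal(scale=0.5, size=T) > 0).astype(float)

# Toy online predictor: running mean of past labels (stand-in for any learner).
preds = np.empty(T)
running_sum = 0.0
for t in range(T):
    preds[t] = running_sum / t if t > 0 else 0.5
    running_sum += y[t]

def windowed_multiaccuracy_error(preds, y, group, W):
    """Max over groups of |mean residual| on each length-W window."""
    errs = []
    for start in range(0, len(y) - W + 1, W):
        sl = slice(start, start + W)
        res = y[sl] - preds[sl]
        per_group = [abs(res[group[sl] == g].mean())
                     for g in (0, 1) if (group[sl] == g).any()]
        errs.append(max(per_group))
    return np.array(errs)

print(windowed_multiaccuracy_error(preds, y, group, W).round(3))
```

A global (whole-horizon) average of the residuals can look small even when individual windows have large group-wise bias, which is the failure mode the abstract attributes to horizon-wide approaches under distribution shift.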
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 22679