Keywords: Online Learning, Replicability
TL;DR: We demonstrate adversarially replicable online learning algorithms for online linear optimization and the experts problem that achieve sub-linear regret.
Abstract: We investigate the concept of algorithmic replicability, introduced by Impagliazzo et al. (2022), in an online setting. In our model, the input sequence received by the online learner is generated from time-varying distributions chosen by an oblivious adversary. Our objective is to design low-regret online algorithms that, with high probability, produce the \emph{exact same sequence} of actions when run on two independently sampled input sequences generated as described above. We refer to such algorithms as adversarially replicable.
Previous works explored replicability in the online setting under inputs generated independently from a fixed distribution; we term this notion iid-replicability. Our model generalizes both settings, capturing adversarial and iid input sequences as well as their mixtures: a fixed adversarial sequence corresponds to choosing point-mass distributions.
We demonstrate adversarially replicable online learning algorithms for online linear optimization and the experts problem that achieve sub-linear regret. Additionally, we propose a general framework for converting an online learner into an adversarially replicable one within our setting, bounding the new regret in terms of the original algorithm’s regret. We also present a nearly optimal (in terms of regret) iid-replicable online algorithm for the experts problem, highlighting the distinction between the iid and adversarial notions of replicability.
Finally, we establish lower bounds on the regret (in terms of the replicability parameter and time) that any replicable online algorithm must incur.
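To make the replicability requirement concrete, here is a minimal, hypothetical sketch (not the paper's actual construction): a follow-the-leader learner for the experts problem whose decisions depend only on empirical average losses rounded to a randomly offset grid. The function name `replicable_ftl`, the grid width, and the rounding scheme are all illustrative assumptions; the shared random offset is the ingredient that lets two runs on independently sampled inputs agree with high probability. The demo below uses point-mass distributions (a fixed adversarial sequence, as allowed by the model), in which case the two "independent" samples coincide and replicability is exact.

```python
import random

def replicable_ftl(losses, shared_seed, grid=0.5):
    """Toy replicable follow-the-leader: pick the expert whose
    grid-rounded average past loss is smallest (hypothetical sketch)."""
    rng = random.Random(shared_seed)      # randomness shared across runs
    offset = rng.uniform(0.0, grid)       # shared random grid offset
    k = len(losses[0])
    cum = [0.0] * k                       # cumulative loss per expert
    actions = []
    for t, loss_vec in enumerate(losses, start=1):
        n = max(1, t - 1)                 # rounds observed so far
        # Round each average loss down to its randomly offset grid cell;
        # small sampling fluctuations usually land in the same cell.
        cell = [int((c / n - offset) // grid) for c in cum]
        actions.append(min(range(k), key=lambda i: (cell[i], i)))
        for i in range(k):
            cum[i] += loss_vec[i]
    return actions

# Point-mass distributions: both sampled input sequences are identical,
# so with the shared seed the action sequences match exactly.
T = 200
fixed = [[(t % 3 == 0) * 1.0, (t % 2 == 0) * 1.0] for t in range(T)]
a1 = replicable_ftl(fixed, shared_seed=7)
a2 = replicable_ftl(fixed, shared_seed=7)
print(a1 == a2)  # True: same inputs and shared seed give the same actions
```

Under genuinely stochastic inputs, agreement holds only with high probability (governed by the grid width), which is the regime the abstract's regret bounds address.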
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 24226