Keywords: online learning, online convex optimization, online linear control
TL;DR: We introduce an online convex optimization framework that captures long-term dependence of current losses on past decisions. We prove matching upper and lower bounds on policy regret, and derive strong regret bounds for a variety of problems.
Abstract: Online convex optimization (OCO) is a widely used framework in online
learning. In each round, the learner chooses a decision in a convex set and an
adversary chooses a convex loss function, and then the learner suffers the
loss associated with their current decision. However, in many applications the
learner's loss depends not only on the current decision but also on the entire
history of decisions up to that point. The OCO framework and its existing
generalizations do not capture this, and they can be applied to many settings of
interest only after a long series of approximation arguments. They also leave
open the question of whether the dependence of the regret on the memory is
tight, because no non-trivial lower bounds are known. In this work we introduce a
generalization of the OCO framework, ``Online Convex Optimization with
Unbounded Memory'', that captures long-term dependence on past decisions. We
introduce the notion of the $p$-effective memory capacity, $H_p$, which quantifies
the maximum influence of past decisions on present losses. We prove an
$O(\sqrt{H_p T})$ upper bound on the policy regret and a matching (worst-case)
lower bound. As a special case, we prove the first non-trivial lower bound for
OCO with finite memory~\citep{anavaHM2015online}, which could be of
independent interest, and also improve existing upper bounds. We demonstrate
the broad applicability of our framework by using it to derive regret bounds,
and to improve and simplify existing regret bound derivations, for a variety
of online learning problems, including online linear control and an online
variant of performative prediction.
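To make the regret notion concrete, here is a minimal sketch of policy regret in the finite-memory special case of \citep{anavaHM2015online}; the memory length $m$ and the indexing conventions below are illustrative paraphrases of standard notation, not the paper's exact definitions. The learner plays $x_t \in \mathcal{X}$ and suffers $f_t(x_{t-m+1}, \dots, x_t)$, and policy regret compares against the best fixed decision played in every round:
$$ R_T \;=\; \sum_{t=1}^{T} f_t(x_{t-m+1}, \dots, x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x, \dots, x), $$
with early rounds padded by some fixed initial decision. In the unbounded-memory generalization, the round-$t$ loss may depend on the entire history $(x_1, \dots, x_t)$, and the $O(\sqrt{H_p T})$ bound above controls this regret, with $H_p$ measuring the maximum influence of past decisions on present losses.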
Supplementary Material: pdf
Submission Number: 2875