Dynamic Regret Bounds without Lipschitz Continuity: Online Convex Optimization with Multiple Mirror Descent Steps

Published: 2022 · ACC 2022 · CC BY-SA 4.0
Abstract: We study the dynamic regret in online convex optimization (OCO), where the cost functions are revealed sequentially over time. Prior studies on the dynamic regret of OCO algorithms often require the cost functions to be Lipschitz continuous. However, the cost functions that arise in many applications may not satisfy this condition. In this work, we analyze the performance of Online Multiple Mirror Descent (OMMD), which can handle non-Lipschitz cost functions. OMMD is based on mirror descent but takes multiple mirror descent steps per online round. We first derive two upper bounds on the dynamic regret based on the path length and the squared path length, and we further derive a third upper bound based on the cumulative optimal cost, which can be much smaller than the path length or the squared path length, especially when the sequence of minimizers fluctuates over time. We show that the dynamic regret of OMMD scales linearly with the minimum among the path length, the squared path length, and the cumulative optimal cost.
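To make the abstract's description concrete, the following is a minimal sketch of the multiple-steps-per-round idea, not the paper's exact algorithm: it instantiates the mirror map as the Euclidean squared norm (so each mirror descent step reduces to a gradient step) and uses hypothetical names `ommd`, `eta`, and `K` for the update routine, step size, and number of inner steps per round.

```python
import numpy as np

def ommd(cost_grads, x0, eta=0.1, K=3):
    """Sketch of Online Multiple Mirror Descent with a Euclidean mirror map.

    cost_grads: sequence of gradient oracles, one per online round,
                each revealed only after the round's decision is played.
    x0:         initial decision.
    eta:        step size (assumed constant here).
    K:          number of mirror descent steps taken per round.
    Returns the list of decisions, one per round (plus the initial point).
    """
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for grad in cost_grads:
        # After the round's cost is revealed, take K descent steps on it
        # before committing to the next round's decision.
        for _ in range(K):
            x = x - eta * grad(x)  # Euclidean mirror map => plain gradient step
        iterates.append(x.copy())
    return iterates

# Illustrative stream: quadratic costs f_t(x) = ||x - c_t||^2 with a drifting
# minimizer c_t; such costs are smooth but not globally Lipschitz on R^n,
# matching the non-Lipschitz setting the paper targets.
centers = [np.array([1.0]), np.array([2.0])]
grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
decisions = ommd(grads, x0=np.zeros(1), eta=0.2, K=10)
```

With several inner steps per round, the decision tracks each round's minimizer closely even as the minimizer moves, which is the mechanism behind the path-length-based regret bounds described above.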