TL;DR: We propose a novel search direction for stochastic online bilevel optimization, enabling first- and zeroth-order algorithms to achieve sublinear regret without window smoothing.
Abstract: Online bilevel optimization (OBO) is a powerful framework for machine learning problems where both outer and inner objectives evolve over time, requiring dynamic updates. Current OBO approaches rely on deterministic \textit{window-smoothed} regret minimization, which may not accurately reflect system performance when functions change rapidly. In this work, we introduce a novel search direction and show that both first- and zeroth-order (ZO) stochastic OBO algorithms leveraging this direction achieve sublinear stochastic bilevel regret without window smoothing. Beyond these guarantees, our framework enhances efficiency by: (i) reducing oracle dependence in hypergradient estimation, (ii) updating inner and outer variables alongside the linear system solution, and (iii) employing ZO-based estimation of Hessians, Jacobians, and gradients. Experiments on online parametric loss tuning and black-box adversarial attacks validate our approach.
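For readers unfamiliar with point (iii), the sketch below illustrates a generic two-point zeroth-order gradient estimator based on Gaussian smoothing, the standard building block behind ZO estimation of gradients (and, applied to gradient components, Jacobians and Hessians). It is a minimal illustration under assumed parameter names (mu, num_dirs), not the paper's exact estimator or search direction.

```python
# Minimal sketch of a two-point zeroth-order (ZO) gradient estimator via
# Gaussian smoothing; illustrative only, not the submission's exact method.
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_dirs=20, rng=None):
    """Estimate the gradient of a black-box objective f at x using
    only function evaluations (no first-order oracle).

    f        : callable R^d -> R
    x        : 1-D array, point at which to estimate the gradient
    mu       : smoothing radius (finite-difference step), assumed small
    num_dirs : number of random probe directions averaged over
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    grad = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)                       # random probe direction
        fd = (f(x + mu * u) - f(x - mu * u)) / (2 * mu)  # central finite difference
        grad += fd * u                                   # directional gradient estimate
    return grad / num_dirs

# Usage: for f(z) = 0.5 ||z||^2 the true gradient is z itself.
f = lambda z: 0.5 * np.dot(z, z)
x0 = np.array([1.0, -2.0, 3.0])
print(zo_gradient(f, x0))  # approximately [1.0, -2.0, 3.0]
```

In expectation over the Gaussian directions, this estimator recovers the gradient of a smoothed version of f, which is why ZO methods can replace explicit gradient, Jacobian, and Hessian oracles in black-box settings such as the adversarial-attack experiments mentioned above.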
Primary Area: General Machine Learning->Online Learning, Active Learning and Bandits
Keywords: Online Learning, Bilevel Learning, Zeroth- and First-Order Methods
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Submission Number: 15387