Sharper Analysis of Single-Loop Methods for Bilevel Optimization

ICLR 2026 Conference Submission 16920 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: bilevel optimization, upper bounds, convergence rate, hypergradient estimation
TL;DR: Sharper convergence analyses are provided for single-loop bilevel optimization algorithms.
Abstract: Bilevel optimization underpins many machine learning applications, including hyperparameter optimization, meta-learning, neural architecture search, and reinforcement learning. While hypergradient-based methods have advanced significantly, a gap persists between theoretical guarantees, which are typically derived for multi-loop algorithms, and the practical single-loop implementations required for efficiency. This work narrows that gap by establishing sharper convergence results for single-loop approximate implicit differentiation (AID) and iterative differentiation (ITD) methods. For AID, we improve the convergence rate from $\mathcal{O}(\kappa^6/K)$ to $\mathcal{O}(\kappa^5/K)$, where $\kappa$ is the condition number of the inner-level problem. For ITD, we prove that the asymptotic error is $\mathcal{O}(\kappa^2)$, exactly matching the known lower bound and improving upon the previous $\mathcal{O}(\kappa^3)$ guarantee. We further validate the refined analyses with experiments on synthetic bilevel optimization tasks.
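
As a rough illustration of the single-loop structure the abstract refers to, the sketch below interleaves one inner gradient step, one step on the linear system defining the AID hypergradient, and one outer step per iteration on a synthetic quadratic bilevel problem. The problem instance, step sizes, and helper names (grad_y_g, hvp_yy, jvp_xy) are illustrative assumptions, not the paper's algorithm or constants.

```python
import numpy as np

# Minimal single-loop AID-style sketch on a synthetic quadratic bilevel problem.
# Inner:  g(x, y) = 0.5 * y^T A y - x^T y   (strongly convex in y, y*(x) = A^{-1} x)
# Outer:  f(x, y) = 0.5 * ||y - b||^2
# Hypergradient: dF/dx = grad_x f - grad_xy g @ v,  where  grad_yy g @ v = grad_y f.

rng = np.random.default_rng(0)
d = 5
A = np.diag(np.linspace(1.0, 10.0, d))        # inner Hessian; condition number kappa = 10
b = rng.standard_normal(d)

grad_y_g = lambda x, y: A @ y - x             # gradient of g in y
grad_y_f = lambda x, y: y - b                 # gradient of f in y
grad_x_f = lambda x, y: np.zeros(d)           # gradient of f in x
hvp_yy   = lambda x, y, v: A @ v              # Hessian-vector product grad_yy g @ v
jvp_xy   = lambda x, y, v: -v                 # cross term grad_xy g @ v (here grad_xy g = -I)

x, y, v = np.zeros(d), np.zeros(d), np.zeros(d)
alpha, beta, xi = 0.05, 0.05, 0.05            # inner, linear-solver, and outer step sizes

for k in range(20000):
    y = y - alpha * grad_y_g(x, y)                      # one inner gradient step
    v = v - beta * (hvp_yy(x, y, v) - grad_y_f(x, y))   # one step on the linear system
    h = grad_x_f(x, y) - jvp_xy(x, y, v)                # AID hypergradient estimate
    x = x - xi * h                                      # outer update

print("hypergradient norm:", np.linalg.norm(h))         # small once the loop has converged
print("distance to x* = A b:", np.linalg.norm(x - A @ b))
```

In this toy instance y*(x) = A^{-1} x, so the outer objective is F(x) = 0.5 ||A^{-1} x - b||^2 and the loop should drive x toward A b; all three sequences are updated once per iteration, which is the single-loop regime whose rates the paper sharpens.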
Primary Area: optimization
Submission Number: 16920