Keywords: Variational Quantum Algorithms, L-smoothness, Optimization Landscapes
Abstract: The successful gradient-based training of Variational Quantum Algorithms (VQAs) hinges on the $L$-smoothness of their optimization landscapes—a property that bounds curvature and ensures stable convergence. While $L$-smoothness is a common assumption when analyzing VQA optimizers, the field has lacked a direct proof for general circuits, a bound tight enough for practical guidance, and principled methods that connect landscape geometry to circuit design. We address these gaps with three core contributions. First, we provide an intuitive proof of $L$-smoothness and derive a new bound on the smoothness constant, $L \le 4\|M\|_{2}\sum_{k=1}^{P}\|G_{k}\|_{2}^{2}$, that is never looser and often strictly tighter than previously known bounds. Second, we show that this bound reliably predicts the scaling behavior of curvature in deep circuits and identify a saturation effect that serves as a direct geometric signature of inefficient overparameterization. Third, we leverage this predictable scaling to introduce an efficient heuristic for setting near-optimal learning rates.
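The abstract's bound depends only on the spectral norm of the measured observable $M$ and the spectral norms of the gate generators $G_k$, so it can be evaluated before any training. The sketch below illustrates this, assuming the standard VQA cost $C(\theta) = \langle\psi(\theta)|M|\psi(\theta)\rangle$ with gates $e^{-i\theta_k G_k}$; the function name, the example circuit, and the step-size choice $\eta = 1/L$ are illustrative, not taken from the paper.

```python
import numpy as np

def smoothness_bound(M, generators):
    """Illustrative evaluation of the abstract's bound
    L <= 4 * ||M||_2 * sum_k ||G_k||_2^2,
    where M is the Hermitian observable and each G_k is the
    Hermitian generator of the k-th parameterized gate."""
    norm_M = np.linalg.norm(M, 2)  # spectral norm (largest singular value)
    return 4.0 * norm_M * sum(np.linalg.norm(G, 2) ** 2 for G in generators)

# Hypothetical example: one qubit, Pauli-Z observable, an RX and an RZ gate
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
generators = [X / 2, Z / 2]        # Pauli-rotation generators, eigenvalues +-1/2
L = smoothness_bound(Z, generators)  # 4 * 1 * (0.25 + 0.25) = 2.0
lr = 1.0 / L                         # classical 1/L step size for gradient descent
```

For Pauli-rotation gates every $\|G_k\|_2 = 1/2$, so the bound grows linearly with the number of parameters $P$ and the implied step size $1/L$ shrinks as $1/P$, which is the kind of scaling behavior the abstract's second contribution examines.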
Supplementary Material: zip
Primary Area: optimization
Submission Number: 15126