Intermediate Gradient Methods with Relative Inexactness

Published: 2025 · Last Modified: 28 Jan 2026 · J. Optim. Theory Appl. 2025 · CC BY-SA 4.0
Abstract: This paper is devoted to studying first-order methods for smooth convex optimization with inexact gradients. Unlike the majority of the literature on this topic, we consider the setting of relative rather than absolute inexactness. More precisely, we assume that the additive error in the gradient is proportional to the gradient norm, rather than being globally bounded by some small quantity. We propose a novel analysis of the accelerated gradient method under relative inexactness and strong convexity, improving the bound on the maximum admissible error that preserves the algorithm's linear convergence. In other words, we analyze the robustness of the accelerated gradient method to relative gradient inexactness. Furthermore, using the Performance Estimation Problem (PEP) technique, we demonstrate that the obtained result is tight up to a numerical constant. Motivated by existing intermediate methods with absolute error, i.e., methods whose convergence rates interpolate between the slower but more robust non-accelerated algorithms and the faster yet less robust accelerated algorithms, we propose an adaptive variant of the intermediate gradient method with relative gradient error.
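The sketch below is only a minimal illustration of the relative-inexactness oracle described in the abstract, where the returned gradient g satisfies ||g - ∇f(x)|| ≤ α ||∇f(x)||, plugged into the textbook constant-momentum accelerated gradient method for a strongly convex quadratic. It is not the paper's proposed analysis or its intermediate method; the parameter names (alpha, L, mu), the test function, and the momentum choice are assumptions made for illustration.

```python
# Minimal sketch (not the paper's algorithm): a relatively inexact gradient
# oracle, ||g_tilde - grad f(x)|| <= alpha * ||grad f(x)||, used inside a
# standard accelerated gradient method on a strongly convex quadratic.
# All names (alpha, L, mu) and the setup are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Strongly convex quadratic f(x) = 0.5 * x^T A x with mu*I <= A <= L*I.
n, mu, L = 50, 1.0, 100.0
A = np.diag(np.linspace(mu, L, n))

def inexact_grad(x, alpha):
    """Exact gradient plus a perturbation of norm exactly alpha * ||grad f(x)||."""
    g = A @ x
    noise = rng.standard_normal(n)
    noise *= alpha * np.linalg.norm(g) / np.linalg.norm(noise)
    return g + noise

def agd_relative_error(x0, alpha, iters=300):
    """Textbook accelerated gradient method for strongly convex f,
    run with the relatively inexact oracle above."""
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # momentum parameter
    x, y = x0.copy(), x0.copy()
    for _ in range(iters):
        g = inexact_grad(y, alpha)
        x_next = y - g / L          # gradient step with inexact gradient
        y = x_next + beta * (x_next - x)
        x = x_next
    return x

x0 = rng.standard_normal(n)
for alpha in (0.0, 0.1, 0.5):
    x = agd_relative_error(x0, alpha)
    print(f"alpha={alpha:.1f}  f(x_T)={0.5 * x @ A @ x:.3e}")
```

Running the sketch with increasing alpha loosely illustrates the robustness question the paper studies: how large the relative error can be while the accelerated method still converges linearly.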