Keywords: variational inequalities, multi-agent optimization, stochastic algorithms
TL;DR: Cubic Acceleration in Finite-Sum VIPs
Abstract: From adversarial robustness to multi-agent learning, many machine learning tasks can be cast as finite-sum min–max optimization or, more generally, as variational inequality problems (VIPs). Owing to their simplicity and scalability, stochastic gradient methods with constant step size are widely used, despite the fact that they converge only up to a bias term. Among the many heuristics adopted in practice, two classical techniques for mitigating this issue have recently attracted attention: \emph{Random Reshuffling} of the data and \emph{Richardson–Romberg extrapolation} across iterates.
In this work, we show that their composition not only cancels the leading linear bias term, but also yields an asymptotic cubic refinement. To the best of our knowledge, our work provides the first theoretical guarantees for such a synergy in structured non-monotone VIPs. Our analysis proceeds in two steps: (i) by smoothing the discrete noise induced by reshuffling, we leverage tools from continuous-state Markov chain theory to establish a law of large numbers and a central limit theorem for the resulting iterates; and (ii) we employ spectral tensor techniques to prove that extrapolation debiases and sharpens the asymptotic behavior
even under the biased gradient oracle induced by reshuffling. Finally, extensive experiments validate our theory, consistently demonstrating substantial speedups in practice.
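To make the two ingredients concrete, the sketch below combines them on a toy finite-sum affine VIP: a constant-step-size method run with per-epoch Random Reshuffling at two step sizes $\gamma$ and $\gamma/2$, whose tail averages are then combined via Richardson–Romberg extrapolation ($2\,\bar z_{\gamma/2} - \bar z_{\gamma}$). This is an illustrative reconstruction, not the paper's algorithm or experimental setup; the problem instance, step sizes, and averaging window are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 4  # number of summands, problem dimension

# Finite-sum affine operator F(z) = mean_i (A_i z + b_i), built so that the
# averaged matrix is close to the identity (hence the VIP is strongly monotone).
A_list = [np.eye(d) + 0.1 * rng.standard_normal((d, d)) for _ in range(n)]
b_list = [rng.standard_normal(d) for _ in range(n)]
A_bar = np.mean(A_list, axis=0)
b_bar = np.mean(b_list, axis=0)
z_star = np.linalg.solve(A_bar, -b_bar)  # unique solution: F(z*) = 0

def run_rr(gamma, epochs):
    """Constant-step iteration with Random Reshuffling; returns a tail average."""
    z = np.zeros(d)
    tail = []
    for ep in range(epochs):
        # Reshuffle: one pass over the data in a fresh random order each epoch.
        for i in rng.permutation(n):
            z = z - gamma * (A_list[i] @ z + b_list[i])
        if ep >= epochs // 2:  # average iterates over the second half only
            tail.append(z.copy())
    return np.mean(tail, axis=0)

gamma, epochs = 0.05, 400
z_g = run_rr(gamma, epochs)        # averaged iterate at step size gamma
z_h = run_rr(gamma / 2, epochs)    # averaged iterate at step size gamma / 2
z_rr = 2 * z_h - z_g               # Richardson–Romberg extrapolation

print("error at gamma:      ", np.linalg.norm(z_g - z_star))
print("error extrapolated:  ", np.linalg.norm(z_rr - z_star))
```

Since the constant-step bias expands in powers of the step size, the extrapolated combination cancels the leading term, which is the mechanism the paper refines to a cubic-order guarantee under reshuffling.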
Supplementary Material: pdf
Primary Area: optimization
Submission Number: 21936