Similarity and Separation of Last-Iterate Convergence between Optimism and Reflected Algorithms in Time-Varying Games

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: time-varying games, Nash equilibrium, reflected gradient algorithm, optimistic gradient algorithm
Abstract: In this paper, we investigate the behaviours of the reflected gradient (RG), accelerated reflected gradient (ARG), and optimistic gradient (OG) algorithms in multi-player games over convex action sets, modelled as variational inequalities whose limits are $L$-smooth, continuous, and monotone in the convergent time-varying case, and which are $L$-smooth, continuous, and monotone at each time in the periodic case. The RG, ARG, and OG algorithms are computationally light, requiring only a single gradient evaluation and projection per iteration. We prove that the RG and ARG algorithms with bounded action sets achieve convergence rates of $O(1/\sqrt{T})$ and $O(1/T)$, respectively, in convergent perturbed monotone games, provided the sequence of time-varying games converges to its limit fast enough; no additional assumptions such as strong monotonicity are required, and this result matches and improves upon existing results for similar algorithms that require two gradient evaluations at different actions per iteration. We also establish a surprising separation: in time-varying games, the standard OG algorithm behaves dramatically differently from its variant and other similar algorithms. The standard OG algorithm converges in any sequence of time-varying monotone $L$-smooth games sharing a common Nash equilibrium, including some periodic games, while a variant differing only slightly diverges exponentially even in periodic games. We further show that the RG and ARG algorithms diverge exponentially in some periodic games.
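The update rules compared in the abstract can be sketched on a toy example. The following is an illustrative sketch, not the paper's experiments: it runs OG and RG on the unconstrained bilinear game $\min_x \max_y xy$, whose monotone operator is $F(x, y) = (y, -x)$ with Nash equilibrium at the origin; the step size, horizon, and initial point are arbitrary choices for illustration.

```python
# Illustrative sketch (not from the paper): optimistic gradient (OG) vs.
# reflected gradient (RG) on the unconstrained bilinear game min_x max_y x*y,
# with monotone operator F(x, y) = (y, -x) and Nash equilibrium (0, 0).
import numpy as np

def F(z):
    # Monotone game operator for min_x max_y x*y.
    x, y = z
    return np.array([y, -x])

def run(update, z0=np.array([1.0, 1.0]), eta=0.1, T=500):
    # Iterate a two-step update rule from z0 and return the last iterate.
    z_prev, z = z0.copy(), z0.copy()
    for _ in range(T):
        z_prev, z = z, update(z, z_prev, eta)
    return z

def og(z, z_prev, eta):
    # OG: z_{t+1} = z_t - eta * (2 F(z_t) - F(z_{t-1}))
    # (one new gradient evaluation per iteration; F(z_{t-1}) is reused).
    return z - eta * (2 * F(z) - F(z_prev))

def rg(z, z_prev, eta):
    # RG: z_{t+1} = z_t - eta * F(2 z_t - z_{t-1})
    # (one gradient evaluation per iteration, at the reflected point).
    return z - eta * F(2 * z - z_prev)

# Last-iterate distance to the Nash equilibrium (0, 0) for each method.
print(np.linalg.norm(run(og)))
print(np.linalg.norm(run(rg)))
```

Note that for an affine operator such as this one the OG and RG updates coincide algebraically; they differ for nonlinear operators and, as the paper argues, can behave very differently in time-varying games.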
Primary Area: learning theory
Submission Number: 12256