Last Iterate Convergence of Popov Method for Non-monotone Stochastic Variational Inequalities

Published: 26 Oct 2023, Last Modified: 13 Dec 2023, NeurIPS 2023 Workshop Oral
Keywords: non-monotone variational inequalities, optimistic gradient method, saddle point problems, stochastic methods
Abstract: This paper focuses on non-monotone stochastic variational inequalities (SVIs) that may not have a unique solution. A commonly used and efficient algorithm for solving VIs is the Popov method, which is known to achieve the optimal convergence rate for VIs with Lipschitz continuous and strongly monotone operators. We introduce a broader class of structured non-monotone operators, namely *$p$-quasi-sharp* operators ($p > 0$), which allows for a tractable analysis of algorithms' convergence behavior. We show that the stochastic Popov method converges *almost surely* to a solution for all operators in this class under a *linear growth* condition. In addition, we obtain the last-iterate convergence rate (in expectation) of the method under the same *linear growth* condition for $2$-quasi-sharp operators. Based on our analysis, we refine these results for smooth $2$-quasi-sharp and $p$-quasi-sharp operators (on a compact set) and obtain optimal convergence rates.
Submission Number: 99
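
For concreteness, below is a minimal NumPy sketch of the stochastic Popov (optimistic gradient) update referenced in the abstract: each iteration makes a single fresh stochastic oracle call and reuses the previous one for the extrapolation step. The bilinear toy operator (which is monotone, used here only for simplicity), step size, and Gaussian noise model are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

def stochastic_popov(F, z0, step, n_iters, noise_std=0.0, seed=None):
    """Stochastic Popov (optimistic gradient) method.

    Iterates (with g a stochastic estimate of the operator F):
        y_k     = z_k - step * g(y_{k-1})   # extrapolation with the stale estimate
        z_{k+1} = z_k - step * g(y_k)       # update with the fresh estimate
    """
    rng = np.random.default_rng(seed)
    z = np.asarray(z0, dtype=float)
    # Initialize the "previous" estimate at z0 (an arbitrary but common choice).
    g_prev = F(z) + noise_std * rng.standard_normal(z.shape)
    for _ in range(n_iters):
        y = z - step * g_prev                                 # extrapolation point
        g = F(y) + noise_std * rng.standard_normal(y.shape)   # one oracle call per iteration
        z = z - step * g                                      # main update
        g_prev = g                                            # reuse at the next extrapolation
    return z

# Toy test: bilinear saddle point min_x max_y x*y with operator F(x, y) = (y, -x),
# whose unique solution is the origin.
F = lambda z: np.array([z[1], -z[0]])
z_last = stochastic_popov(F, z0=[1.0, 1.0], step=0.1, n_iters=2000, noise_std=0.01, seed=0)
print(z_last)  # last iterate; should be close to [0, 0]
```

Note the design choice that distinguishes Popov's method from the extragradient method: only one (stochastic) operator evaluation is made per iteration, with the previous evaluation recycled for the extrapolation step, halving the oracle cost.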