Generalization Bounds of Nonconvex-(Strongly)-Concave Stochastic Minimax Optimization

Published: 06 Feb 2023 · Last Modified: 30 Jan 2024 · AISTATS 2024 · CC BY 4.0
Abstract: This paper studies the generalization performance of algorithms for solving nonconvex-(strongly)-concave (NC-SC\,/\,NC-C) stochastic minimax optimization, measured by the stationarity of the primal function. We first establish \textit{algorithm-agnostic generalization bounds} via \emph{uniform convergence} between the empirical minimax problem and the population minimax problem. The sample complexities for achieving $\epsilon$-generalization are $\tilde{\mathcal{O}}(d\kappa^2\epsilon^{-2})$ and $\tilde{\mathcal{O}}(d\epsilon^{-4})$ for the NC-SC and NC-C settings, respectively, where $d$ is the dimension of the primal variable and $\kappa$ is the condition number. We further study \textit{algorithm-dependent generalization bounds} via stability arguments. In particular, we introduce a novel stability notion for minimax problems and establish a connection between stability and generalization. As a result, we obtain \textit{algorithm-dependent generalization bounds} for \emph{stochastic gradient descent ascent (SGDA)} and the more general class of \emph{sampling-determined algorithms (SDA)}.
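For readers unfamiliar with SGDA, the algorithm alternates a stochastic gradient descent step on the primal variable with a stochastic gradient ascent step on the dual variable. The sketch below runs it on a toy strongly-concave-in-$y$ objective $f(x, y) = xy - \tfrac{1}{2}y^2$; the objective, noise level, and step sizes are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Toy objective f(x, y) = x*y - 0.5*y^2 (1-strongly concave in y).
# Its primal function is Phi(x) = max_y f(x, y) = 0.5*x^2, minimized at x = 0.
rng = np.random.default_rng(0)
NOISE = 0.01  # scale of the stochastic gradient noise (assumed)

def grad_x(x, y):
    # Stochastic gradient of f with respect to the primal variable x.
    return y + NOISE * rng.standard_normal()

def grad_y(x, y):
    # Stochastic gradient of f with respect to the dual variable y.
    return x - y + NOISE * rng.standard_normal()

x, y = 1.0, 1.0
eta_x, eta_y = 0.05, 0.1  # primal / dual step sizes (assumed)
for _ in range(2000):
    gx, gy = grad_x(x, y), grad_y(x, y)
    x -= eta_x * gx  # descent on the primal variable
    y += eta_y * gy  # ascent on the dual variable

# Both iterates should settle near the saddle point (0, 0).
print(x, y)
```

For this bilinear-plus-quadratic example the deterministic SGDA dynamics contract toward the saddle point, so after enough iterations the iterates hover near the origin at a scale set by the gradient noise.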