Keywords: Minimax optimization, Continual learning
Abstract: This paper considers continual finite-sum convex-concave minimax optimization. We seek a sequence $(x_1^*, y_1^*), \dots, (x_n^*, y_n^*)$ corresponding to the saddle points of the prefix-sum functions $\{g_i(x, y) \coloneqq \frac{1}{i}\sum_{j=1}^{i} f_j(x, y)\}_{i=1}^{n}$, where each component function $f_j\colon \mathbb{R}^{d_x} \times \mathbb{R}^{d_y} \to \mathbb{R}$ is strongly-convex-strongly-concave and the feasible sets $\mathcal{X} \subseteq \mathbb{R}^{d_x}$ and $\mathcal{Y} \subseteq \mathbb{R}^{d_y}$ are convex and compact. We propose an efficient stochastic first-order algorithm that finds a sequence of $\epsilon$-saddle points for this continual finite-sum minimax problem. In particular, our approach constructs the full gradient only sparsely across the stages, and it leverages the extragradient iteration to achieve a sharper incremental first-order oracle complexity than existing methods. We also extend our method to continual finite-sum minimax optimization in the general convex-concave setting. Furthermore, we conduct numerical experiments that demonstrate the effectiveness of our approaches.
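The abstract builds on the extragradient iteration for saddle-point problems. Below is a minimal sketch of one projected extragradient step, not the paper's full algorithm: the names `project`, `extragradient_step`, the ball-projection stand-in for $\mathcal{X}$ and $\mathcal{Y}$, and the toy objective are all illustrative assumptions.

```python
import numpy as np

def project(z, radius=1.0):
    # Euclidean projection onto an l2 ball (a hypothetical stand-in
    # for projection onto the compact feasible sets X and Y).
    norm = np.linalg.norm(z)
    return z if norm <= radius else z * (radius / norm)

def extragradient_step(x, y, grad_x, grad_y, eta):
    # One projected extragradient iteration for min_x max_y g(x, y),
    # where grad_x and grad_y return the partial gradients of g.
    # Extrapolation (half) step at the current point.
    x_half = project(x - eta * grad_x(x, y))
    y_half = project(y + eta * grad_y(x, y))
    # Update step using gradients evaluated at the extrapolated point.
    x_new = project(x - eta * grad_x(x_half, y_half))
    y_new = project(y + eta * grad_y(x_half, y_half))
    return x_new, y_new

# Toy strongly-convex-strongly-concave example (assumed for illustration):
# g(x, y) = (mu/2)|x|^2 + x^T A y - (mu/2)|y|^2, saddle point at the origin.
mu = 0.1
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
gx = lambda x, y: mu * x + A @ y       # gradient of g in x
gy = lambda x, y: A.T @ x - mu * y     # gradient of g in y

x, y = np.ones(2), np.ones(2)
for _ in range(200):
    x, y = extragradient_step(x, y, gx, gy, eta=0.2)
print(x, y)  # both iterates should approach the saddle point at the origin
```

In a continual finite-sum setting, one would apply such steps to each prefix-sum objective $g_i$, using stochastic gradient estimates and only occasionally recomputing a full gradient, as the abstract describes.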
Primary Area: optimization
Submission Number: 15444