Near-Optimal Algorithm with Complexity Separation for Strongly Convex-Strongly Concave Composite Saddle Point Problems

Published: 20 Sept 2024, Last Modified: 20 Sept 2024
Venue: ICOMP Publication
License: CC BY 4.0
Keywords: saddle point problem, composite optimization, bilinear saddle-point problem, sliding technique, complexity separation
Abstract: In this work, we revisit the saddle-point problem $\min_x \max_y p(x) + R(x,y) - q(y)$, where the function $R(x,y)$ is $L_R$-smooth, $\mu_x$-strongly convex, and $\mu_y$-strongly concave, and the functions $p(x), q(y)$ are convex and $L_p$-, $L_q$-smooth, respectively. We develop a new algorithm that achieves separation of complexities with respect to the computation of the gradients $\nabla R(x,y)$ and $\nabla p(x)$, $\nabla q(y)$. In particular, our algorithm requires $\mathcal{O}\left(\left(\sqrt{\frac{L_p}{\mu_x}} + \sqrt{\frac{L_q}{\mu_y}} + \frac{L_R}{\sqrt{\mu_x \mu_y}}\right)\log \frac{1}{\varepsilon}\right)$ computations of the gradient $\nabla R(x,y)$ and $\mathcal{O}\left(\left(\sqrt{\frac{L_p}{\mu_x}} + \sqrt{\frac{L_q}{\mu_y}}\right) \log \frac{1}{\varepsilon}\right)$ computations of the gradients $\nabla p(x)$, $\nabla q(y)$ to find an $\varepsilon$-accurate solution to the problem. Moreover, under the condition $L_R \geq \sqrt{\mu_x L_q + \mu_y L_p}$, the algorithm becomes optimal (up to logarithmic factors), i.e., it cannot be improved further, as it matches the existing lower complexity bounds. To the best of our knowledge, our algorithm is the first to achieve near-optimal complexity separation in the case $\mu_x \neq \mu_y$.
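The stated oracle complexities can be made concrete with a small numerical sketch. The snippet below evaluates the two gradient-call bounds from the abstract for sample problem constants (with the hidden constants in the $\mathcal{O}(\cdot)$ notation taken as 1 for illustration; the function name and sample values are ours, not from the paper) and checks the near-optimality condition $L_R \geq \sqrt{\mu_x L_q + \mu_y L_p}$:

```python
import math

def oracle_complexities(L_p, L_q, L_R, mu_x, mu_y, eps):
    """Evaluate the abstract's gradient-oracle bounds with unit constants."""
    log_term = math.log(1.0 / eps)
    # Composite part: sqrt(L_p / mu_x) + sqrt(L_q / mu_y)
    comp_term = math.sqrt(L_p / mu_x) + math.sqrt(L_q / mu_y)
    # Coupling part: L_R / sqrt(mu_x * mu_y)
    coupling_term = L_R / math.sqrt(mu_x * mu_y)
    grad_R_calls = (comp_term + coupling_term) * log_term   # calls to grad R
    grad_pq_calls = comp_term * log_term                    # calls to grad p, grad q
    # Regime in which the bound matches the lower bound (up to log factors)
    near_optimal = L_R >= math.sqrt(mu_x * L_q + mu_y * L_p)
    return grad_R_calls, grad_pq_calls, near_optimal

# Example: a strongly coupled instance (L_R dominates the composite smoothness)
gR, gpq, opt = oracle_complexities(L_p=10, L_q=10, L_R=100,
                                   mu_x=1, mu_y=1, eps=1e-3)
```

With these sample constants the coupling term dominates, so the algorithm spends far more $\nabla R$ evaluations than $\nabla p$, $\nabla q$ evaluations; this gap is exactly the "complexity separation" the abstract refers to.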
Submission Number: 83