Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities
Abstract: In this paper we propose three $p$-th order tensor methods for $\mu$-strongly-convex-strongly-concave saddle point problems (SPP). The first method is based on the assumption of $p$-th order smoothness of the objective, and it achieves a convergence rate of $O\left(\left(\frac{L_p R^{p-1}}{\mu}\right)^{\frac{2}{p+1}} \log \frac{\mu R^2}{\varepsilon_G}\right)$, where $R$ is an estimate of the initial distance to the solution and $\varepsilon_G$ is the error in terms of the duality gap. Under additional assumptions of first- and second-order smoothness of the objective, we connect the first method with a locally superlinearly convergent algorithm and develop a second method with the complexity $O\left(\left(\frac{L_p R^{p-1}}{\mu}\right)^{\frac{2}{p+1}} \log \frac{L_2 R \max\{1, L_1/\mu\}}{\mu} + \log\log \frac{L_3^{1/2}}{\mu^2 \varepsilon_G} \log \frac{L_1 L_2}{\mu^2}\right)$. The third method is a modified version of the second method, and it solves the gradient norm minimization SPP with $\tilde{O}\left(\left(\frac{L_p R^{p}}{\varepsilon_\nabla}\right)^{\frac{2}{p+1}}\right)$ oracle calls, where $\varepsilon_\nabla$ is the error in terms of the norm of the gradient of the objective. Since we treat SPP as a particular case of variational inequalities, we also propose three methods for strongly monotone variational inequalities with the same complexities as those described above.
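For context, the problem class in the abstract can be written out as follows. This is a sketch of the standard SPP/VI setting in my own notation, not quoted from the paper:

```latex
% mu-strongly-convex-strongly-concave saddle point problem:
% f(x, y) is mu-strongly convex in x and mu-strongly concave in y.
\min_{x} \max_{y} \; f(x, y)
% Setting z = (x, y) and defining the operator
%   F(z) = ( \nabla_x f(x, y), \, -\nabla_y f(x, y) ),
% the SPP becomes a special case of a variational inequality
% with a mu-strongly monotone operator:
\langle F(z) - F(z'), \; z - z' \rangle \;\ge\; \mu \, \| z - z' \|^2
\quad \text{for all } z, z'.
% Here L_p denotes the Lipschitz constant of the p-th derivative of f,
% which is the smoothness assumption behind the rates above.
```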