Last-iterate convergence rates for min-max optimization

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission
Keywords: min-max optimization, zero-sum game, saddle point, last-iterate convergence, non-asymptotic convergence, global rates, Hamiltonian, sufficiently bilinear
TL;DR: We prove that global linear last-iterate convergence rates are achievable for more general classes of convex-concave min-max optimization problems than had previously been shown.
Abstract: While classic work in convex-concave min-max optimization relies on average-iterate convergence results, the emergence of nonconvex applications such as training Generative Adversarial Networks has led to renewed interest in last-iterate convergence guarantees. Proving last-iterate convergence is challenging because many natural algorithms, such as Simultaneous Gradient Descent/Ascent, provably diverge or cycle even in simple convex-concave min-max settings, and previous work on global last-iterate convergence rates has been limited to the bilinear and convex-strongly concave settings. In this work, we show that the Hamiltonian Gradient Descent (HGD) algorithm achieves linear convergence in a variety of more general settings, including convex-concave problems that satisfy a “sufficiently bilinear” condition. We also prove similar convergence rates for some parameter settings of the Consensus Optimization (CO) algorithm of Mescheder et al. 2017.
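For intuition, below is a minimal sketch of Hamiltonian Gradient Descent on a toy bilinear game in JAX. HGD runs gradient descent on the Hamiltonian H = ½‖ξ‖², where ξ is the simultaneous-gradient vector field; the payoff matrix, step size, and iteration count here are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch of Hamiltonian Gradient Descent (HGD) on a toy bilinear game
# f(x, y) = x^T A y, a setting where plain simultaneous gradient
# descent/ascent cycles instead of converging.
import jax
import jax.numpy as jnp

A = jnp.array([[1.0, 2.0], [3.0, 4.0]])  # illustrative payoff matrix

def f(x, y):
    return x @ A @ y  # min over x, max over y

def xi(params):
    # Simultaneous-gradient vector field: (grad_x f, -grad_y f).
    x, y = params
    gx = jax.grad(f, argnums=0)(x, y)
    gy = jax.grad(f, argnums=1)(x, y)
    return gx, -gy

def hamiltonian(params):
    # H = 1/2 * ||xi||^2; it vanishes exactly at the saddle point.
    gx, gy = xi(params)
    return 0.5 * (jnp.sum(gx ** 2) + jnp.sum(gy ** 2))

grad_H = jax.grad(hamiltonian)

eta = 0.05  # step size (illustrative choice)
params = (jnp.array([1.0, -1.0]), jnp.array([0.5, 2.0]))
for _ in range(500):
    gH = grad_H(params)
    params = tuple(p - eta * g for p, g in zip(params, gH))

# H should have shrunk substantially, reflecting last-iterate convergence
# of HGD on this bilinear game.
print("H at final iterate:", hamiltonian(params))
```

Consensus Optimization can be viewed in the same terms: it mixes the simultaneous-gradient step with a multiple of the Hamiltonian gradient used above, rather than descending the Hamiltonian alone.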
