## Escape saddle points by a simple gradient-descent based algorithm

21 May 2021, 20:49 (edited 27 Oct 2021) · NeurIPS 2021 Poster · Readers: Everyone
• Keywords: saddle points, gradient descent, stochastic optimization, nonconvex optimization, negative curvature finding
• TL;DR: We propose a simple gradient-based algorithm that finds an eps-approximate second-order stationary point of an n-dimensional function in ~O(log n/eps^1.75) iterations, a polynomial speedup in log n over prior methods; it also applies to stochastic optimization.
• Abstract: Escaping saddle points is a central research topic in nonconvex optimization. In this paper, we propose a simple gradient-based algorithm such that for a smooth function $f\colon\mathbb{R}^n\to\mathbb{R}$, it outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}(\log n/\epsilon^{1.75})$ iterations. Compared to the previous state-of-the-art algorithms by Jin et al. with $\tilde{O}(\log^4 n/\epsilon^{2})$ or $\tilde{O}(\log^6 n/\epsilon^{1.75})$ iterations, our algorithm is polynomially better in terms of $\log n$ and matches their complexities in terms of $1/\epsilon$. For the stochastic setting, our algorithm outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}(\log^{2} n/\epsilon^{4})$ iterations. Technically, our main contribution is an idea of implementing a robust Hessian power method using only gradients, which can find negative curvature near saddle points and achieve the polynomial speedup in $\log n$ compared to the perturbed gradient descent methods. Finally, we also perform numerical experiments that support our results.
• Supplementary Material: zip
• Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
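The abstract's main technical idea is a robust Hessian power method implemented using only gradients to find negative curvature near saddle points. The sketch below is not the paper's algorithm; it is a minimal illustration, assuming access to a gradient oracle `grad`, of how Hessian-vector products can be approximated by finite differences of gradients and plugged into power iteration on I - eta*H, whose top eigenvector aligns with the most negative curvature direction. The function name and all parameters (`r`, `eta`, `iters`) are illustrative choices, not values from the paper.

```python
import numpy as np

def negative_curvature_direction(grad, x, r=1e-4, eta=0.01, iters=100, rng=None):
    """Illustrative sketch: gradient-only power iteration to expose a
    negative-curvature direction of f near a candidate saddle point x.

    Relies on the finite-difference approximation
        H(x) @ v  ~  (grad(x + r*v) - grad(x)) / r,
    so only gradient evaluations are needed (no explicit Hessian).
    """
    rng = np.random.default_rng() if rng is None else rng
    g0 = grad(x)
    v = rng.standard_normal(x.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = (grad(x + r * v) - g0) / r   # approximate Hessian-vector product
        v = v - eta * hv                  # power step on (I - eta*H)
        v /= np.linalg.norm(v)            # renormalize
    # Rayleigh quotient along v; a clearly negative value indicates an escape direction
    curvature = v @ ((grad(x + r * v) - g0) / r)
    return v, curvature

# Example usage on the toy saddle f(x, y) = x^2 - y^2 at the origin
if __name__ == "__main__":
    f_grad = lambda z: np.array([2 * z[0], -2 * z[1]])
    v, lam = negative_curvature_direction(f_grad, np.zeros(2))
    print(v, lam)  # v roughly aligns with the y-axis, lam close to -2
```

This is only meant to convey why gradient differences suffice to detect negative curvature; the paper's actual method adds the robustness and perturbation machinery needed to obtain the stated $\tilde{O}(\log n/\epsilon^{1.75})$ guarantee.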