A Variant of Anderson Mixing with Minimal Memory Size

Published: 31 Oct 2022, Last Modified: 23 Dec 2022
Venue: NeurIPS 2022 (Accept)
Keywords: Anderson mixing, unconstrained optimization, stochastic optimization
TL;DR: We develop a new variant of Anderson mixing that stores only one vector pair and achieves promising results on deterministic optimization and neural network training.
Abstract: Anderson mixing (AM) is a useful method that accelerates fixed-point iterations by exploiting information from historical iterations. Despite its numerical success in various applications, the memory requirement of AM remains a bottleneck when solving large-scale optimization problems on a resource-limited machine. To address this problem, we propose a novel variant of the AM method, called Min-AM, which stores only one vector pair, the minimal memory size required by AM. Our method forms a symmetric approximation to the inverse Hessian matrix and is proven to be equivalent to the full-memory Type-I AM for strongly convex quadratic optimization. Moreover, for general nonlinear optimization problems, we establish the convergence properties of Min-AM under reasonable assumptions and show that the mixing parameters can be chosen adaptively by estimating the eigenvalues of the Hessian. Finally, we extend Min-AM to solve stochastic programming problems. Experimental results on logistic regression and neural network training problems validate the effectiveness of the proposed Min-AM.
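
For readers unfamiliar with the underlying technique, the sketch below illustrates classical full-memory Anderson acceleration for a fixed-point map g(x) = x, i.e., the general AM framework the abstract refers to; it is not the paper's Min-AM variant, which reduces the stored history to a single vector pair. The function name anderson_mixing, the window size m, and the damping parameter beta are illustrative assumptions, not taken from the paper.

import numpy as np

def anderson_mixing(g, x0, m=5, beta=1.0, tol=1e-10, max_iter=100):
    """Accelerate the fixed-point iteration x <- g(x) using a window of m past steps."""
    x = np.asarray(x0, dtype=float)
    X_hist, F_hist = [], []                      # histories of iterates and residuals
    for _ in range(max_iter):
        gx = g(x)
        f = gx - x                               # residual of the fixed-point map
        if np.linalg.norm(f) < tol:
            break
        X_hist.append(x.copy()); F_hist.append(f.copy())
        X_hist, F_hist = X_hist[-(m + 1):], F_hist[-(m + 1):]   # keep at most m differences
        if len(F_hist) > 1:
            # Least-squares fit of the mixing coefficients on residual differences.
            dF = np.stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)], axis=1)
            dX = np.stack([X_hist[i + 1] - X_hist[i] for i in range(len(X_hist) - 1)], axis=1)
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            # Anderson update: combine the damped residual step with the history correction.
            x = x + beta * f - (dX + beta * dF) @ gamma
        else:
            x = x + beta * f                     # plain (damped) fixed-point step
    return x

As a toy usage example, anderson_mixing(np.cos, np.array([1.0])) converges to the fixed point of cos(x) (about 0.739), typically in far fewer iterations than the plain iteration x <- cos(x). Note that the full-memory scheme above stores up to m vector pairs, which is exactly the memory cost the paper's one-pair Min-AM variant is designed to avoid.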