MARINA Meets Matrix Stepsizes: Variance Reduced Distributed Non-Convex Optimization

Published: 28 Oct 2023, Last Modified: 05 Dec 2023
Venue: FL@FM-NeurIPS’23 Poster
Student Author Indication: Yes
Keywords: Federated Learning, Non-Convex Optimization, Optimization
TL;DR: This paper proposes a variance-reduced, matrix-stepsize federated learning algorithm for non-convex objectives.
Abstract: Gradient descent algorithms with matrix stepsizes have been shown to be more efficient for non-convex optimization than their scalar counterparts. The det-CGD algorithm, introduced by [LKR23], leverages matrix stepsizes to perform compressed gradient descent on non-convex, matrix-smooth objectives in a federated manner. The authors establish convergence of the algorithm to a neighborhood of a weighted stationarity point under a convex condition on the symmetric, positive-definite stepsize matrix. In this paper, we propose a variance-reduced version of the det-CGD algorithm that incorporates the MARINA method. Notably, we establish, both theoretically and empirically, that det-MARINA outperforms both MARINA and the distributed det-CGD algorithms.
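The abstract describes the algorithmic ingredients only at a high level; for intuition, the following is a minimal sketch of a MARINA-style variance-reduced step combined with a matrix stepsize on a toy quadratic problem. The rand-k compressor, the local objectives, and all parameter values (D, p, k) are assumptions chosen for illustration; this is not the paper's det-MARINA algorithm, whose exact update rule and stepsize condition are given in the paper.

```python
# Illustrative sketch (not the paper's det-MARINA): a MARINA-style
# variance-reduced update with a matrix stepsize D on toy local quadratics.
import numpy as np

rng = np.random.default_rng(0)
d, n_workers, k = 10, 4, 3          # dimension, number of workers, rand-k budget

# Toy local objectives f_i(x) = 0.5 x^T A_i x - b_i^T x (assumed for demonstration)
A = [np.diag(rng.uniform(1.0, 5.0, d)) for _ in range(n_workers)]
b = [rng.normal(size=d) for _ in range(n_workers)]
grad = lambda i, x: A[i] @ x - b[i]

def rand_k(v):
    """Unbiased rand-k compressor: keep k random coordinates, rescale by d/k."""
    mask = np.zeros_like(v)
    mask[rng.choice(len(v), size=k, replace=False)] = 1.0
    return (len(v) / k) * mask * v

D = 0.1 * np.eye(d)                 # symmetric positive-definite matrix stepsize (assumed valid)
p = 0.2                             # probability of communicating exact gradients
x = rng.normal(size=d)
g = np.mean([grad(i, x) for i in range(n_workers)], axis=0)  # initial gradient estimate

for _ in range(200):
    x_new = x - D @ g               # matrix-stepsize descent step with current estimate
    if rng.random() < p:
        # Occasionally synchronize with exact (uncompressed) local gradients
        g = np.mean([grad(i, x_new) for i in range(n_workers)], axis=0)
    else:
        # Otherwise communicate compressed gradient differences (variance reduction)
        g = g + np.mean([rand_k(grad(i, x_new) - grad(i, x))
                         for i in range(n_workers)], axis=0)
    x = x_new

print("final gradient norm:",
      np.linalg.norm(np.mean([grad(i, x) for i in range(n_workers)], axis=0)))
```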
Submission Number: 4