Variance Reduced Distributed Non-Convex Optimization Using Matrix Stepsizes

ICLR 2026 Conference Submission 11740 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Federated Learning, Non-Convex Optimization, Optimization
TL;DR: This paper proposes two variance-reduced, matrix-stepsized federated learning algorithms for non-convex objectives.
Abstract: Matrix-stepsized gradient descent algorithms have been shown to outperform their scalar counterparts on non-convex optimization problems. The det-CGD algorithm, introduced by Li et al. (2023), leverages matrix stepsizes to perform compressed gradient descent on non-convex, matrix-smooth objectives in a federated manner. The authors establish the algorithm's convergence to a neighborhood of a weighted stationarity point under a convex condition on the symmetric positive-definite matrix stepsize. In this paper, we propose two variance-reduced versions of det-CGD that incorporate the MARINA and DASHA methods. Notably, we establish, both theoretically and empirically, that det-MARINA and det-DASHA outperform MARINA, DASHA, and the distributed det-CGD algorithms in terms of iteration and communication complexity.
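For illustration only, here is a minimal Python sketch of a single matrix-stepsized compressed gradient step in the spirit of det-CGD as described in the abstract, not the authors' exact method: it assumes an unbiased Rand-k compressor and an update of the form x ← x − D C(∇f(x)) with a symmetric positive-definite stepsize matrix D; the names `rand_k_compressor` and `det_cgd_step` are hypothetical.

```python
import numpy as np

def rand_k_compressor(g, k, rng):
    """Rand-k sparsification (assumed compressor): keep k random
    coordinates of g, rescaled by d/k so that E[C(g)] = g."""
    d = g.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(g)
    out[idx] = g[idx] * (d / k)
    return out

def det_cgd_step(x, grad_fn, D, k, rng):
    """One illustrative step: compress the gradient, then apply the
    symmetric positive-definite matrix stepsize D. Hypothetical
    rendering of the update x <- x - D C(grad f(x))."""
    return x - D @ rand_k_compressor(grad_fn(x), k, rng)

# Toy usage: minimize f(x) = 0.5 * ||A x||^2 with D = gamma * I,
# which recovers the scalar-stepsize special case.
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
grad_f = lambda x: A.T @ (A @ x)
D = 0.05 * np.eye(d)
x = rng.standard_normal(d)
for _ in range(200):
    x = det_cgd_step(x, grad_f, D, k=2, rng=rng)
```

Setting D = γI recovers the scalar-stepsize special case; per the abstract, the proposed det-MARINA and det-DASHA variants would replace the raw compressed gradient with a MARINA- or DASHA-style variance-reduced estimator.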
Supplementary Material: zip
Primary Area: optimization
Submission Number: 11740