Keywords: Federated Learning, Error Feedback, L-BFGS, Biased Compression, Communication-Efficient Optimization, PL Condition
TL;DR: We propose EF21+L-BFGS, a communication-efficient Quasi-Newton method for federated learning that combines curvature-aware updates with error feedback, achieving faster convergence and better performance under biased compression.
Abstract: In this paper, we propose a new class of Quasi-Newton methods for federated learning by integrating them with the error feedback framework, specifically the EF21 mechanism, which offers stronger theoretical guarantees and better practical performance than earlier error-feedback schemes, overcoming their dependence on strong assumptions and their high communication overhead.
Quasi-Newton methods, particularly the Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm, are renowned for their empirical efficiency. Building on this efficiency, our proposed EF21+L-BFGS algorithm achieves an $\mathcal{O}\left(\tfrac{1}{T}\right)$ convergence rate in the nonconvex setting and enjoys linear convergence under the Polyak–Łojasiewicz (PL) condition. Through both theoretical analysis and empirical evaluations, we demonstrate the effectiveness of our approach, showing faster convergence and improved model performance compared to existing methods.
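To make the combination concrete, the following is a minimal sketch of one communication round under assumptions not taken from the paper: Top-K is used as the biased compressor, the server builds the search direction with the standard L-BFGS two-loop recursion, and the function names (`top_k`, `two_loop_direction`, `ef21_lbfgs_round`) are hypothetical. It illustrates the general EF21 pattern (clients transmit only compressed differences between fresh gradients and their local states), not the paper's exact algorithm.

```python
import numpy as np

def top_k(v, k):
    """Biased Top-K compressor: keep the k largest-magnitude entries of v."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def two_loop_direction(grad, s_hist, y_hist):
    """Standard L-BFGS two-loop recursion: approximates H @ grad from the
    stored curvature pairs (s, y)."""
    q = grad.copy()
    alphas = []
    pairs = list(zip(s_hist, y_hist))
    for s, y in reversed(pairs):                    # newest pair first
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if pairs:                                       # initial scaling: s'y / y'y
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(pairs, reversed(alphas)):  # oldest pair first
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q

def ef21_lbfgs_round(x, local_grads, g_states, g_server, s_hist, y_hist, lr, k):
    """One hypothetical round: each client sends only the compressed shift
    between its fresh gradient and its EF21 state; the server aggregates the
    shifts and steps along the resulting L-BFGS direction."""
    n = len(local_grads)
    for i in range(n):
        c = top_k(local_grads[i] - g_states[i], k)  # compressed message (EF21)
        g_states[i] += c                            # client-side state update
        g_server = g_server + c / n                 # server tracks the average
    direction = two_loop_direction(g_server, s_hist, y_hist)
    return x - lr * direction, g_states, g_server
```

In a full run, the server would also append a new curvature pair $(s_t, y_t) = (x_{t+1} - x_t, g^{t+1} - g^t)$ to the limited memory after each round and evict the oldest pair once the memory budget is reached.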
Submission Number: 148