Abstract: In this paper, we propose a decentralized Newton-type approach to solve the decentralized federated learning (FL) problem. Notably, the proposed algorithm leverages the fast convergence of second-order methods while avoiding the transmission of the Hessian matrix at each iteration. Therefore, the proposed approach significantly reduces the communication cost and preserves privacy. Specifically, we alternate between two problems. The inner problem approximates the inverse Hessian-gradient product; it is formulated as a quadratic optimization problem and solved approximately, in a decentralized manner, using one step of the group alternating direction method of multipliers (GADMM). The outer problem learns the model by performing one decentralized Newton step at every iteration. Moreover, to reduce the communication overhead per iteration, a quantized variant leveraging stochastic quantization is also proposed. Simulation results illustrate that our algorithm outperforms the GADMM, Q-GADMM, Newton tracking, and decentralized SGD baselines, and provides energy- and communication-efficient solutions for bandwidth-limited systems across different SNR regimes.
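As a worked restatement of the two levels described above (the notation here is ours and not taken from the abstract): at iteration $k$, with local Hessian $H_k$, gradient $g_k$, model $\theta_k$, and step size $\alpha$, the inner quadratic subproblem and the outer Newton step can be sketched as

\[
d_k \;=\; \arg\min_{d}\;\tfrac{1}{2}\, d^{\top} H_k\, d \;-\; g_k^{\top} d \;=\; H_k^{-1} g_k,
\qquad
\theta_{k+1} \;=\; \theta_k \;-\; \alpha\, d_k .
\]

Since the quadratic objective has gradient $H_k d - g_k$, its minimizer is exactly the inverse Hessian-gradient product; solving this subproblem only approximately (here, via one decentralized GADMM step) is what allows each iteration to proceed without forming or exchanging $H_k$ itself.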