Adaptive pruning-based Newton's method for distributed learning

Published: 01 Jan 2025, Last Modified: 12 Feb 2025 · Theor. Comput. Sci. 2025 · CC BY-SA 4.0
Abstract: Newton's method exploits curvature information and therefore outperforms first-order methods on distributed learning problems. However, it is impractical in large-scale, heterogeneous learning environments due to obstacles such as the high computation and communication costs of the Hessian matrix, sub-model diversity, staleness of training, and data heterogeneity. To overcome these obstacles, this paper presents a novel and efficient algorithm named Distributed Adaptive Newton Learning (DANL), which addresses these drawbacks through a simple Hessian initialization and adaptive allocation of training regions. The algorithm exhibits strong convergence properties, which are rigorously analyzed under standard assumptions in stochastic optimization. The theoretical analysis proves that DANL attains a linear convergence rate while adapting to available resources and maintaining high efficiency. Furthermore, DANL is notably independent of the problem's condition number and eliminates the need for complex parameter tuning. Experiments demonstrate that DANL achieves linear convergence with efficient communication and strong performance across diverse datasets.
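The abstract describes DANL only at a high level, so the sketch below is a rough, hypothetical illustration of a generic distributed Newton-type update: each worker computes a local direction using a simple (here, scaled-identity) Hessian initialization, and a server aggregates the directions by a weighted average. This is not the DANL algorithm itself; all function names and parameters are invented for illustration.

```python
import numpy as np

def local_newton_direction(grad, hess, init_scale=1.0):
    """Solve (init_scale * I + H) d = -g.

    The scaled-identity term stands in for a 'simple Hessian
    initialization'; the particular choice here is hypothetical."""
    n = grad.shape[0]
    return np.linalg.solve(init_scale * np.eye(n) + hess, -grad)

def aggregate(directions, weights):
    """Server-side weighted average of worker directions
    (e.g., weighted by local sample counts)."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return sum(wi * d for wi, d in zip(w, directions))

# Toy usage: three workers, each holding a local quadratic loss
# f_i(x) = 0.5 * x^T A_i x - b_i^T x, so grad = A_i x - b_i, Hessian = A_i.
rng = np.random.default_rng(0)
dim = 5
workers = []
for _ in range(3):
    M = rng.standard_normal((dim, dim))
    workers.append((M @ M.T + np.eye(dim), rng.standard_normal(dim)))

x = np.zeros(dim)
for step in range(20):
    dirs = [local_newton_direction(A @ x - b, A) for A, b in workers]
    x = x + aggregate(dirs, [1.0] * len(workers))  # uniform weights here
```

A practical method in this setting would additionally handle staleness, sub-model diversity, and heterogeneous worker resources, all of which this toy example omits.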