Differentially Private Dynamic Average Consensus-Based Newton Method for Distributed Optimization Over General Networks

Published: 01 Jan 2025 · Last Modified: 13 May 2025 · IEEE Trans. Syst. Man Cybern. Syst., 2025 · CC BY-SA 4.0
Abstract: This article investigates privacy preservation in distributed optimization, where each node holds a local private objective function and collaborates to minimize the sum of those functions. A novel dynamic average consensus-based distributed Newton algorithm is introduced to achieve consensus, optimality, and differential privacy simultaneously. Each node uses its local gradient and Hessian as time-varying reference signals and exchanges information with its neighbors to track their network-wide averages. To safeguard privacy, persistent Laplace noise is injected into the exchanged data, perturbing the estimated optimal solution and the tracked gradient and Hessian averages. To counteract the noise's impact, the internode coupling strength is reduced over time through decay factors, so that the noise is attenuated as the algorithm progresses. Accurate convergence to the optimal solution is proven under global smoothness and strong convexity of the objective functions. Furthermore, the efficiency and reliability of the algorithm are validated empirically through simulations on an IEEE 14-bus test system.
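The abstract outlines three mechanisms: noisy message exchange among neighbors, dynamic average consensus tracking of gradients and Hessians, and a coupling strength that decays over time to attenuate the injected Laplace noise. The following is a minimal sketch of these mechanisms on a toy scalar problem; the network, objective functions, decay rates, noise scales, and step size are illustrative assumptions, not the paper's exact update rules or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy problem: node i holds f_i(x) = 0.5 * (x - a_i)^2, so the minimizer
# of sum_i f_i is mean(a). All constants below are illustrative, not the paper's.
n = 5
a = rng.normal(size=n)                      # each node's private data
A = np.array([[0, 1, 0, 0, 1],              # assumed undirected ring network
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], float)
deg = A.sum(axis=1)

grad = lambda x: x - a                      # stacked local gradients
hess = lambda x: np.ones_like(x)            # stacked local Hessians

x = rng.normal(size=n)                      # local estimates of the optimizer
g = grad(x)                                 # dynamic-average tracker of the gradient
h = hess(x)                                 # dynamic-average tracker of the Hessian

alpha, eps = 0.2, 1e-6                      # assumed Newton step size and Hessian floor
for k in range(300):
    gamma = 0.4 * 0.99 ** k                 # assumed decaying coupling strength
    b = 0.5 * 0.95 ** k                     # assumed decaying Laplace noise scale

    # Every transmitted quantity is perturbed with Laplace noise before sharing.
    xs = x + rng.laplace(scale=b, size=n)
    gs = g + rng.laplace(scale=b, size=n)
    hs = h + rng.laplace(scale=b, size=n)

    g_old, h_old = grad(x), hess(x)

    # Consensus on the noisy messages, weighted by the decaying coupling gamma,
    # plus a damped Newton correction using the tracked gradient and Hessian.
    x = x + gamma * (A @ xs - deg * x) - alpha * g / np.maximum(h, eps)

    # Dynamic average consensus updates for the gradient and Hessian trackers.
    g = g + gamma * (A @ gs - deg * g) + grad(x) - g_old
    h = h + gamma * (A @ hs - deg * h) + hess(x) - h_old

print("local estimates:", x)
print("minimizer of the sum:", a.mean())
```

This sketch only illustrates the interplay of noise injection, decaying coupling, and gradient/Hessian tracking; it carries none of the paper's convergence or differential-privacy guarantees.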