Local Differential Privacy for Decentralized Online Stochastic Optimization With Guaranteed Optimality and Convergence Speed
Abstract: The increasing use of streaming data has raised significant privacy concerns in decentralized optimization and learning applications. To address this issue, differential privacy (DP) has emerged as a standard approach for privacy protection in decentralized online optimization. Regrettably, existing DP solutions for decentralized online optimization face the dilemma of trading optimization accuracy for privacy. In this article, we propose a local-DP solution for decentralized online optimization/learning that ensures both optimization accuracy and rigorous DP, even over an infinite time horizon. Compared with our prior results, which rely on a decaying coupling strength to gradually eliminate the influence of DP noise, the proposed approach allows the coupling strength to be time-invariant, which ensures a high convergence speed. Moreover, unlike prior results, which require precise gradient information to ensure optimality, the proposed approach guarantees convergence in mean square to the optimal solution even in the presence of stochastic gradients. We corroborate the effectiveness of our algorithm on multiple benchmark machine-learning applications, including logistic regression on the "mushrooms" dataset and convolutional-neural-network-based image classification on the "MNIST" and "CIFAR-10" datasets.
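The abstract gives no pseudocode, so the Python sketch below is only a rough illustration of the general pattern it describes: consensus-based decentralized online stochastic gradient descent in which each agent perturbs the messages it shares with Laplace noise (local DP) and uses a time-invariant coupling strength. All names (decentralized_private_sgd, coupling, noise_scale) and the specific update rule are assumptions made for illustration, not the paper's actual algorithm or its noise calibration.

    import numpy as np

    def decentralized_private_sgd(grads, adjacency, x0, steps=1000,
                                  coupling=0.1, noise_scale=0.5,
                                  step_size=lambda k: 1.0 / (k + 1)):
        """Illustrative sketch (not the paper's method): each agent mixes
        Laplace-perturbed copies of its neighbors' states, then takes a
        stochastic-gradient step with a diminishing step size.

        grads:     list of callables grad_i(x, k) returning a stochastic gradient
        adjacency: symmetric 0/1 numpy array describing the communication graph
        x0:        (n_agents, dim) array of initial states
        """
        x = x0.copy()
        n, _ = x.shape
        for k in range(steps):
            # Each agent publishes a Laplace-perturbed state (local DP):
            # neighbors only ever see the noisy message, never the true state.
            msgs = x + np.random.laplace(scale=noise_scale, size=x.shape)
            new_x = np.empty_like(x)
            for i in range(n):
                nbrs = np.nonzero(adjacency[i])[0]
                # Time-invariant coupling strength (no decaying weights),
                # followed by a stochastic-gradient update.
                consensus = coupling * (msgs[nbrs] - x[i]).sum(axis=0)
                new_x[i] = x[i] + consensus - step_size(k) * grads[i](x[i], k)
            x = new_x
        return x

    # Hypothetical usage: two agents jointly minimizing 0.5*||x - a_i||^2
    # from noisy gradients over a two-node graph.
    targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    grads = [lambda x, k, a=a: (x - a) + 0.01 * np.random.randn(*x.shape)
             for a in targets]
    x_out = decentralized_private_sgd(grads, np.array([[0, 1], [1, 0]]),
                                      np.zeros((2, 2)))

In this sketch the coupling weight stays constant across iterations, which is the design choice the abstract highlights: prior DP approaches decay the coupling strength to suppress the injected noise, at the cost of slower convergence.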