Keywords: Differential Privacy, Federated Learning, Non-convex Optimization
Abstract: This paper studies distributed non-convex optimization under privacy requirements. We develop a differentially private, communication-efficient algorithm and analyze its privacy-utility trade-off. By incorporating the shuffled model into our algorithmic design, we achieve strong privacy and utility guarantees without relying on a trusted central server. We further show that the proposed method attains improved utility guarantees (faster convergence rates) compared to previous approaches. Additionally, we present preliminary experimental results that corroborate our theoretical findings.
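The shuffled model the abstract invokes is a standard privacy-amplification pattern: each client randomizes its report locally, and an intermediary shuffler strips sender identities before the server aggregates. The paper's actual algorithm is not given here; the following is a minimal, hypothetical Python sketch of that general pattern, in which the function names, clipping norm, and noise scale `sigma` are all illustrative assumptions rather than the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_report(grad, clip_norm, sigma):
    """Client-side step (assumed form): clip the gradient to a bounded
    norm and add Gaussian noise locally, so no party ever sees a raw
    per-client gradient."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def shuffle_and_aggregate(reports):
    """Shuffler + server: randomly permute the reports to break the link
    between a report and its sender, then average. The mean itself is
    permutation-invariant; the shuffle matters only for the privacy
    analysis, where anonymity amplifies each client's local guarantee."""
    order = rng.permutation(len(reports))
    return np.mean([reports[i] for i in order], axis=0)

# One hypothetical communication round: clients send privatized gradients,
# the shuffler anonymizes them, the server takes a noisy averaged step.
dim, n_clients = 10, 100
w = np.zeros(dim)
client_grads = [rng.normal(size=dim) for _ in range(n_clients)]  # stand-ins
reports = [local_report(g, clip_norm=1.0, sigma=0.5) for g in client_grads]
w -= 0.1 * shuffle_and_aggregate(reports)
```

Under this pattern, no trusted central server is required: privacy rests on local noise plus the shuffler's anonymization, which is the property the abstract highlights.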
Submission Number: 33