(Amplified) Banded Matrix Factorization: A unified approach to private training

Published: 21 Sept 2023, Last Modified: 02 Nov 2023 · NeurIPS 2023 poster
Keywords: Machine Learning, Differential Privacy, Optimization, Private Machine Learning, Federated Learning, Privacy Amplification, Matrix Factorization
TL;DR: We propose a new class of banded matrix mechanisms that (a) are compatible with cross-device FL and (b) fully subsume amplified DP-SGD. This enables MF-DP-FTRL approaches to match or outperform DP-SGD across all $\varepsilon$.
Abstract: Matrix factorization (MF) mechanisms for differential privacy (DP) have substantially improved the state-of-the-art in privacy-utility-computation tradeoffs for ML applications in a variety of scenarios, but in both the centralized and federated settings there remain instances where either MF cannot be easily applied, or other algorithms provide better tradeoffs (typically, as $\varepsilon$ becomes small). In this work, we show how MF can subsume prior state-of-the-art algorithms in both federated and centralized training settings, across all privacy budgets. The key technique throughout is the construction of MF mechanisms with banded matrices (lower-triangular matrices with at most $\hat{b}$ nonzero bands, including the main diagonal). For cross-device federated learning (FL), this enables multiple participations under a relaxed device-participation schema compatible with practical FL infrastructure (as demonstrated by a production deployment). In the centralized setting, we prove that banded matrices enjoy the same privacy amplification results as the ubiquitous DP-SGD algorithm, but can provide strictly better performance in most scenarios; this lets us always at least match DP-SGD, and often outperform it.
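To make the banded construction concrete, here is a minimal NumPy sketch of the MF mechanism idea (our illustration, not the authors' implementation): it builds a lower-triangular strategy matrix $C$ with at most $\hat{b}$ nonzero bands, factors a prefix-sum workload $A = BC$, and releases $B(Cx + z)$ with Gaussian noise $z$. The names `banded_strategy` and `b_hat`, and the toy scalar-gradient setup, are illustrative assumptions.

```python
import numpy as np

def banded_strategy(n: int, b_hat: int) -> np.ndarray:
    """Lower-triangular strategy matrix C with at most b_hat nonzero
    bands (including the main diagonal). b_hat = 1 gives C = I."""
    lower = np.tril(np.ones((n, n)))                      # all lower-triangular entries
    bands = np.triu(np.ones((n, n)), -(b_hat - 1))        # keep only b_hat bands below/on diag
    return lower * bands

n, b_hat = 8, 3
A = np.tril(np.ones((n, n)))          # prefix-sum workload: A @ x = running sums of x
C = banded_strategy(n, b_hat)         # banded strategy matrix
B = A @ np.linalg.inv(C)              # factorization A = B @ C (C is unit lower-triangular)

x = np.random.randn(n)                # per-step "gradients" (scalar toy case)
sigma = 1.0                           # noise multiplier (illustrative)
z = sigma * np.random.randn(n)        # Gaussian noise; sensitivity is set by the columns of C
private_prefix_sums = B @ (C @ x + z)

assert np.allclose(B @ C, A)          # the factorization reproduces the workload exactly
```

With `b_hat = 1`, `C` collapses to the identity and the mechanism releases `A @ (x + z)`, i.e., the iterates of noised-gradient DP-SGD; larger `b_hat` correlates noise across nearby steps while the banding keeps the structure needed for the amplification analysis described in the abstract.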
Supplementary Material: pdf
Submission Number: 8945