Input and Output Privacy in Cross-Silo Federated Settings: an MPC+DP Approach

TMLR Paper 4562 Authors

26 Mar 2025 (modified: 20 Jun 2025) · Rejected by TMLR · CC BY 4.0
Abstract: We address the problem of training a machine learning model on data held by multiple data holders in a cross-silo federated setup while ensuring privacy guarantees. Existing Federated Learning (FL) solutions with Differential Privacy (DP), or Secure Multiparty Computation (MPC) with DP, are often limited to either horizontal or vertical partitioning and typically suffer from accuracy loss compared to a centralized setting. We propose an MPC-based approach for training differentially private linear models that supports any partitioning scenario and effectively combines MPC and DP. Our solution employs MPC protocols for both model training and output perturbation using Laplace-like noise. By simulating a trusted curator through MPC, our approach provides the benefits of global DP without requiring an actual trusted party. The resulting MPC+DP method achieves accuracy comparable to a centralized DP setup while maintaining the stated privacy guarantees against the other data holders.
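To make the output-perturbation idea in the abstract concrete, the following is a minimal plaintext sketch (not the paper's MPC protocol): a ridge-regression model is trained, and Laplace noise calibrated to a sensitivity bound is added to the learned weights before release. The function name, the regularization-based sensitivity constant, and all parameters are illustrative assumptions; in the paper's setting the training and noise addition would both run inside MPC so that no individual party ever sees the unperturbed model.

```python
import numpy as np

def output_perturbed_ridge(X, y, epsilon, lam=1.0, seed=None):
    """Illustrative output perturbation for ridge regression.

    Trains w = argmin_w (1/n)||Xw - y||^2 + lam*||w||^2 in the clear,
    then adds per-coordinate Laplace noise. The sensitivity constant
    below is a placeholder in the style of ERM output-perturbation
    analyses, not the bound derived in the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Closed-form ridge solution -- the "trusted curator" computation
    # that the paper instead performs under MPC.
    w = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
    # Assumed L1-sensitivity bound, shrinking with n and lam.
    sensitivity = 2.0 / (n * lam)
    noise = rng.laplace(scale=sensitivity / epsilon, size=d)
    return w + noise
```

A larger `epsilon` (weaker privacy) or larger dataset shrinks the noise scale, so the released model approaches the non-private ridge solution, which is the accuracy behavior the abstract claims for the MPC+DP method.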
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Antti_Koskela1
Submission Number: 4562