Keywords: Privacy-preserving machine learning, Federated Learning, Differential Privacy, Secure Multi-Party Computation, Benchmarking, System cost, Energy consumption, Resource monitoring, PrivacyBench framework
TL;DR: PrivacyBench benchmarks utility, computational cost, and energy of hybrid privacy ML (FL, DP, SMPC), revealing non-additive interactions—from efficiency gains to catastrophic FL+DP failures—and underscoring the need for holistic system-level design.
Abstract: Privacy-preserving machine learning techniques are increasingly deployed in hybrid combinations, yet their system-level interactions remain poorly understood. We introduce PRIVACYBENCH, a comprehensive framework that reveals non-additive behaviors in combinations of privacy techniques, with significant performance and resource implications. Evaluating Federated Learning (FL), Differential Privacy (DP), and Secure Multi-Party Computation (SMPC) across ResNet18 and ViT models on medical datasets, we uncover striking disparities: while FL and FL+SMPC preserve utility with modest overhead, FL+DP combinations exhibit severe convergence issues—accuracy drops from 98% to 13%, training time increases 16×, and energy consumption rises 20×. PRIVACYBENCH provides the first systematic evaluation framework to jointly track utility, computational cost, and environmental impact across privacy configurations. These findings demonstrate that privacy techniques cannot be treated as modular components and highlight critical considerations for deploying privacy-preserving ML systems in resource-constrained environments.
Submission Number: 125