Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning

Published: 19 Jun 2023, Last Modified: 23 Jul 2023
Venue: FL-ICML 2023
Keywords: Privacy, Federated Learning, Gradient Leakage
TL;DR: We study the detectability of malicious server attacks in federated learning, show that prior attacks are detectable, and propose SEER, a novel attack framework that reconstructs data from the gradients of large batches and is by design harder to detect.
Abstract: Malicious server (MS) attacks have scaled data stealing in federated learning to more challenging settings. However, concerns have been raised regarding the client-side detectability of MS attacks, questioning their practicality once they become publicly known. In this work, we thoroughly study the problem of detectability for the first time. We show that most prior MS attacks, which fundamentally rely on one of two key principles, are detectable by principled client-side checks. Further, we propose SEER, a novel attack framework that is less detectable by design and able to steal user data from gradients even for large batch sizes (up to 512) and under secure aggregation. Our key insight is the use of a secret decoder, jointly trained with the shared model, to disaggregate in a secret space. Our work is a promising first step towards a more principled treatment of MS attacks, paving the way for realistic data stealing that can compromise user privacy in real-world deployments.
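To make the key insight concrete, the sketch below shows one way the joint training described in the abstract could look in PyTorch. It is a minimal illustration under assumed components: SharedModel, SecretDecoder, and the first-element target selection are hypothetical placeholders standing in for the paper's actual architecture and secret selection property, and the loop simulates a single client's gradient rather than a secure aggregate over many clients.

```python
# Hypothetical sketch of SEER-style joint training of a shared model and a
# secret, server-side decoder. All module names and shapes are illustrative
# assumptions, not the paper's released code.
import torch
import torch.nn as nn

class SharedModel(nn.Module):
    """Stand-in for the model the server distributes to clients."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 32 * 32, 32), nn.ReLU(),
            nn.Linear(32, 10),
        )
    def forward(self, x):
        return self.net(x)

class SecretDecoder(nn.Module):
    """Server-only decoder: maps a flattened gradient back to an image."""
    def __init__(self, grad_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(grad_dim, 128), nn.ReLU(),
            nn.Linear(128, 3 * 32 * 32),
        )
    def forward(self, g):
        return self.net(g).view(-1, 3, 32, 32)

def simulated_client_gradient(model, batch, labels):
    """Gradient a client would send. create_graph=True keeps the graph so
    the server can backpropagate through the gradient computation itself."""
    loss = nn.functional.cross_entropy(model(batch), labels)
    grads = torch.autograd.grad(loss, tuple(model.parameters()),
                                create_graph=True)
    return torch.cat([g.flatten() for g in grads])

model = SharedModel()
grad_dim = sum(p.numel() for p in model.parameters())
decoder = SecretDecoder(grad_dim)
opt = torch.optim.Adam(list(model.parameters()) + list(decoder.parameters()),
                       lr=1e-4)

for _ in range(100):  # server-side pre-training on auxiliary data
    batch = torch.randn(8, 3, 32, 32)        # placeholder auxiliary batch
    labels = torch.randint(0, 10, (8,))
    target = batch[:1]                       # sample singled out by the secret
                                             # property (placeholder: first item)
    g = simulated_client_gradient(model, batch, labels)
    recon = decoder(g.unsqueeze(0))          # disaggregate in the secret space
    loss = nn.functional.mse_loss(recon, target)
    opt.zero_grad()
    loss.backward()                          # updates decoder AND shared model
    opt.step()
```

The design point this sketch isolates is the second-order path: because the client gradient is computed with create_graph=True, the reconstruction loss also shapes the shared model's weights, so the gradients that clients later send are, by construction, decodable in the server's secret space.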
Submission Number: 92