Theoretically Unmasking Inference Attacks Against LDP-Protected Clients in Federated Vision Models

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: In federated learning with local differential privacy (LDP), membership inference risks persist, as shown by our theoretical analysis and practical evaluations.
Abstract: Federated Learning (FL) enables collaborative learning among clients via a coordinating server while avoiding direct data sharing, offering a perceived solution to preserve privacy. However, recent studies on Membership Inference Attacks (MIAs) have challenged this notion, showing high success rates against unprotected training data. While local differential privacy (LDP) is widely regarded as a gold standard for privacy protection in data analysis, most studies on MIAs either neglect LDP or fail to provide theoretical guarantees for attack success against LDP-protected data. To address this gap, we derive theoretical lower bounds for the success rates of low-polynomial-time MIAs that exploit vulnerabilities in fully connected or self-attention layers, regardless of the LDP mechanism used. We establish that even when data are protected by LDP, privacy risks persist, depending on the privacy budget. Practical evaluations on models like ResNet and Vision Transformer confirm considerable privacy risks, revealing that the noise required to mitigate these attacks significantly degrades models' utility.
Lay Summary: Federated Learning (FL) is a popular method that allows devices (like smartphones) to train a shared machine learning model without directly sharing their personal data. This sounds private, but there are still risks—especially from a central server that might act dishonestly. One major threat is a membership inference attack (MIA), where an attacker tries to figure out whether your data was used to train a model. To guard against this, many systems use Local Differential Privacy (LDP)—a technique that adds randomized noise to your data before it even leaves your device. LDP is supposed to protect your privacy, but this paper shows, both in theory and with real experiments, that even with LDP protection, privacy risks persist depending on the amount of noise added, and the amount of noise needed to truly protect data privacy can significantly degrade the model's utility.
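To make the noise-versus-utility tension concrete, below is a minimal, hypothetical sketch of client-side LDP perturbation using the Laplace mechanism; the function name, clipping threshold, and privacy-budget values are illustrative assumptions and are not taken from the paper or its released code.

```python
import numpy as np

def laplace_ldp(update: np.ndarray, epsilon: float, clip_norm: float = 1.0) -> np.ndarray:
    """Perturb a client's vector with Laplace noise before it leaves the device.

    Illustrative sketch only: clipping bounds the L1 sensitivity to `clip_norm`,
    and the noise scale grows as the privacy budget `epsilon` shrinks.
    """
    # Clip so any single contribution has bounded L1 norm (sensitivity = clip_norm).
    l1_norm = np.abs(update).sum()
    clipped = update * min(1.0, clip_norm / max(l1_norm, 1e-12))
    # Laplace noise with scale = sensitivity / epsilon: smaller epsilon => more noise.
    scale = clip_norm / epsilon
    return clipped + np.random.laplace(loc=0.0, scale=scale, size=update.shape)

# Tighter budgets (smaller epsilon) inject more noise, which is what erodes model utility.
update = np.random.randn(10)
print(laplace_ldp(update, epsilon=8.0))   # mild perturbation
print(laplace_ldp(update, epsilon=0.5))   # heavy perturbation
```

The paper's claim, roughly, is that for budgets loose enough to keep the model useful, the perturbation above still leaves enough signal for membership inference.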
Link To Code: https://github.com/GivralNguyen/FL-LDP-AMI
Primary Area: Social Aspects->Privacy
Keywords: Federated Learning, Differential Privacy, Membership Inference Attack
Submission Number: 8895