FedCRAP: Federated Critical-Region-Aware Perturbations for Refined Privacy-Preserving Federated Learning
Keywords: Federated Learning, Privacy Preservation, Gradient Inversion Attacks, Utility-Privacy Tradeoff
Abstract: Federated Learning (FL) facilitates collaborative model training across a network of decentralized clients, enabling the development of global models without requiring raw data exchange. This approach preserves data privacy and security by keeping data localized on individual devices, but it remains vulnerable to gradient inversion attacks. Existing defense mechanisms rely on global noise injection, which not only incurs excessive utility loss or computational overhead but also fails to adequately protect sensitive information that warrants stronger emphasis. Intensifying global perturbations to shield these localized sensitive areas further degrades the overall utility of the image. This issue is particularly pronounced in sparse medical imaging data, where critical features are confined to specific regions.
To address this challenge, we propose Federated Critical-Region-Aware Perturbations (FedCRAP), a novel defense framework that leverages gradient-guided sparsity patterns. FedCRAP strategically injects noise into task-critical regions identified by high gradient magnitudes, aligning perturbations with the intrinsic sparsity of medical imaging data.
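The idea of perturbing only task-critical regions identified by high gradient magnitudes can be illustrated with a minimal sketch. The function below is a simplified, hypothetical illustration of gradient-guided selective noise injection, not the authors' actual FedCRAP algorithm; the function name, the top-fraction parameter, and the Gaussian noise model are all assumptions made for exposition.

```python
import numpy as np

def critical_region_perturb(gradients, noise_scale=0.1, top_fraction=0.1, rng=None):
    """Add Gaussian noise only to the highest-magnitude gradient entries.

    Illustrative sketch of gradient-guided, region-targeted perturbation;
    the actual FedCRAP mechanism may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    flat = np.abs(gradients).ravel()
    k = max(1, int(top_fraction * flat.size))
    threshold = np.partition(flat, -k)[-k]   # k-th largest magnitude
    mask = np.abs(gradients) >= threshold    # task-critical region
    noise = rng.normal(0.0, noise_scale, size=gradients.shape)
    return gradients + mask * noise
```

Under this sketch, low-magnitude entries pass through unchanged, so utility loss is concentrated where the privacy risk is assumed to be highest.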
By integrating domain-specific sparsity awareness, FedCRAP achieves a favorable balance between privacy preservation and model performance. Injecting noise selectively rather than globally yields a finer-grained protection strategy that is particularly effective on data with localized critical features.
Extensive experiments across diverse datasets, including sparse medical datasets, demonstrate that FedCRAP preserves model accuracy while significantly reducing privacy leakage risks, outperforming previous state-of-the-art (SoTA) methods for privacy-preserving federated learning.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 15547