Gradient-Level Differential Privacy Against Attribute Inference Attack for Speech Emotion Recognition

Published: 01 Jan 2024, Last Modified: 18 Apr 2025. IEEE Signal Process. Lett. 2024. CC BY-SA 4.0
Abstract: The Federated Learning (FL) paradigm for distributed privacy preservation is valued for its ability to collaboratively train Speech Emotion Recognition (SER) models while keeping data localized. However, recent studies reveal privacy leakage during model sharing. Existing differential privacy schemes face growing inference attack risks as clients expose more model updates. To address these challenges, we propose a Gradient-level Hierarchical Differential Privacy (GHDP) strategy to mitigate attribute inference attacks. GHDP employs normalization to distinguish gradient importance, clipping the most significant gradients to filter out sensitive information that could cause privacy leaks. Additionally, stronger random perturbations are applied to early model layers during backpropagation, achieving hierarchical differential privacy through layered noise addition. This theoretically grounded approach offers enhanced protection for critical information. Our experiments show that GHDP maintains stable SER performance while providing robust privacy protection that is unaffected by the number of model updates.
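The abstract's core mechanism, per-gradient clipping followed by layer-dependent noise with heavier perturbation on earlier layers, can be approximated in a few lines. The sketch below assumes PyTorch and is a generic illustration of the idea, not the paper's exact method; `clip_norm`, `sigma`, and the depth weighting `alpha` are hypothetical placeholders rather than values from the paper.

```python
import torch

def ghdp_step(model, clip_norm=1.0, sigma=0.5, alpha=0.1):
    """Sketch of a gradient-level hierarchical DP step (assumed parameters).

    Each parameter's gradient is clipped to an L2 norm of at most
    `clip_norm`, then Gaussian noise is added whose scale decreases
    with layer depth, so earlier layers receive more perturbation.
    """
    params = [p for p in model.parameters() if p.grad is not None]
    n_layers = len(params)
    for depth, p in enumerate(params):
        grad = p.grad
        # Normalize/clip: rescale gradients whose L2 norm exceeds clip_norm.
        scale = (clip_norm / (grad.norm(2) + 1e-12)).clamp(max=1.0)
        grad.mul_(scale)
        # Hierarchical noise: smaller `depth` (earlier layer) gets a
        # larger noise multiplier; `alpha` controls the gradient of
        # noise across layers (illustrative choice, not from the paper).
        layer_sigma = sigma * (1.0 + alpha * (n_layers - 1 - depth))
        grad.add_(torch.randn_like(grad) * layer_sigma * clip_norm)
```

Under these assumptions, `ghdp_step(model)` would be called between `loss.backward()` and `optimizer.step()`, so that only the clipped, noised gradients ever reach the shared model update.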