Abstract: The emerging field of Federated Learning (FL) is reshaping privacy-preserving data analysis and decision-support mechanisms within several critical infrastructure (CI) sectors, such as autonomous transportation, energy, and healthcare. To shield sensitive operational and client data from privacy attackers, Differential Privacy (DP) has been proposed as an additional layer on top of the FL process. Yet, we identify that injecting Gaussian noise to achieve the DP guarantee can inadvertently create a new vector for differential model poisoning attacks in FL. Moreover, exploiting the variance of the Gaussian noise enables attackers to camouflage their activities within the legitimate noise of the system, a significant yet largely overlooked security flaw in the differentially private federated learning (DPFL) framework.
Addressing this research gap, we introduce a novel adaptive
model poisoning through episodic loss memorization (α-MPELM)
technique. This method enables attackers to dynamically inject
adversarial noise into the differentially private local model
parameters. The technique has a dual purpose: hindering the
optimal convergence of the global FL model and simultaneously
avoiding detection by anomaly detectors. Our evaluation of the α-MPELM attack reveals its capability to deceive the Norm, Accuracy, and Mix anomaly detection algorithms, surpassing conventional random malicious device (RMD) attacks with attack accuracy improvements of 6.8%, 12.6%, and 13.8%, respectively.
Additionally, we introduce a reinforcement learning-based DP
level selection strategy, rDP, as an effective countermeasure
against the α-MPELM attack. Our empirical findings confirm that this defense mechanism steadily converges to an optimal policy.
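As a minimal sketch of the vulnerability described above (not the paper's α-MPELM algorithm itself), the snippet below contrasts a benign DPFL client, which clips its local update and adds calibrated Gaussian noise, with a malicious client that scales an adversarial perturbation to the standard deviation of that legitimate noise so norm-based anomaly detectors are less likely to flag it. The parameters CLIP_NORM and SIGMA and the helper names are illustrative assumptions, not values or interfaces from the paper.

```python
import numpy as np

# Hypothetical parameters for illustration; not the paper's exact settings.
CLIP_NORM = 1.0   # L2 clipping bound for local updates
SIGMA = 0.8       # Gaussian noise multiplier used for the DP guarantee


def dp_local_update(update, rng):
    """Benign client: clip the local update and add calibrated Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, CLIP_NORM / (norm + 1e-12))
    noise = rng.normal(0.0, SIGMA * CLIP_NORM, size=update.shape)
    return clipped + noise


def poisoned_local_update(update, attack_direction, rng, scale=1.0):
    """Malicious client: hide an adversarial perturbation inside the noise envelope.

    The perturbation magnitude is kept comparable to the standard deviation of
    the legitimate DP noise, so the poisoned update resembles a noisy benign one.
    """
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, CLIP_NORM / (norm + 1e-12))
    noise = rng.normal(0.0, SIGMA * CLIP_NORM, size=update.shape)
    # Adversarial component bounded by the expected DP noise magnitude.
    direction = attack_direction / (np.linalg.norm(attack_direction) + 1e-12)
    adversarial = scale * SIGMA * CLIP_NORM * direction
    return clipped + noise + adversarial
```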