Enhancing Model Poisoning Attacks to Byzantine-Robust Federated Learning via Critical Learning Periods
Abstract: Most existing model poisoning attacks in federated learning (FL) control a set of malicious clients and share a fixed number of malicious gradients with the server in each FL training round, to achieve a desired tradeoff between the attack impact and the attack budget. In this paper, we show that such a tradeoff is not fundamental: an adaptive attack budget not only improves the impact of an attack $\mathcal{A}$ but also makes it more resilient to defenses. However, adaptively determining the number of malicious clients that share malicious gradients with the central server in each FL training round has received little attention. This is because most existing model poisoning attacks focus on the FL optimization itself to maximize damage to the global model, and largely ignore the training dynamics of the underlying deep neural networks used in FL. Inspired by recent findings on critical learning periods (CLP), during which small gradient errors have an irrecoverable impact on model accuracy, we advocate CLP-augmented model poisoning attacks, denoted $\mathcal{A}$-CLP. $\mathcal{A}$-CLP merely augments an existing model poisoning attack $\mathcal{A}$ with an adaptive attack budget scheme.
Specifically, $\mathcal{A}$-CLP inspects changes in the federated gradient norm to identify CLP and adaptively adjusts the number of malicious clients that share their malicious gradients with the server in each round, improving the attack impact by up to 6.85× over $\mathcal{A}$ with a smaller attack budget. This in turn improves the resilience of $\mathcal{A}$ to defenses by up to 2×. Because $\mathcal{A}$-CLP is orthogonal to the underlying attack $\mathcal{A}$, it inherits $\mathcal{A}$'s need to craft malicious gradients by solving a difficult optimization problem. To tackle this challenge, and building on our understanding of $\mathcal{A}$-CLP, we further relax the inner attack subroutine $\mathcal{A}$ in $\mathcal{A}$-CLP and design GraSP, a lightweight CLP-augmented similarity-based attack.
We show that GraSP is not only more flexible but also achieves greater attack impact than the strongest existing model poisoning attacks.
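The adaptive-budget idea above can be illustrated with a minimal sketch: track the federated gradient norm across rounds, flag a critical learning period when the norm is still changing sharply, and spend more of the attack budget in those rounds. The detection rule, threshold, and function names here are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch of a CLP-based adaptive attack budget (assumptions:
# the detection rule, the 0.1 threshold, and the budget bounds are all
# hypothetical, chosen only to demonstrate the mechanism).
from typing import List


def detect_clp(grad_norms: List[float], threshold: float = 0.1) -> bool:
    """Flag a critical learning period when the federated gradient norm
    changes sharply between consecutive rounds (assumed rule: relative
    change above `threshold` indicates CLP)."""
    if len(grad_norms) < 2:
        return True  # treat the earliest rounds as critical
    prev, curr = grad_norms[-2], grad_norms[-1]
    return abs(curr - prev) / max(prev, 1e-12) > threshold


def attack_budget(grad_norms: List[float],
                  max_malicious: int = 20,
                  min_malicious: int = 2) -> int:
    """Return how many malicious clients share gradients this round:
    spend the full budget during CLP, a minimal budget outside it."""
    return max_malicious if detect_clp(grad_norms) else min_malicious
```

Under this sketch, a round where the gradient norm drops steeply would deploy all malicious clients, while a plateaued norm would trigger only the minimal budget, concentrating the damage in the rounds where small gradient errors are hardest to recover from.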