Federated Bilevel Learning Against Model Poisoning Attacks

Published: 2026 · Last Modified: 23 Jan 2026 · IEEE Trans. Netw. 2026 · License: CC BY-SA 4.0
Abstract: The hierarchical structure of distributed bilevel optimization (DBO) makes it more vulnerable to backdoor attacks than single-level optimization. Existing defense methods primarily target single-level settings and overlook how attack impact propagates and amplifies across levels in DBO. In this paper, we propose a defense mechanism $\textsf{BGFBL}$ for distributed bilevel optimization that deploys defenses at both the inner and outer levels. The inner-level defense corrects client gradient updates, while the outer-level defense applies gradient clipping before aggregation to mitigate the impact of malicious updates and restrict attack spread. To the best of our knowledge, $\textsf{BGFBL}$ is the first algorithm with theoretical guarantees for distributed bilevel learning against model poisoning attacks. We show that $\textsf{BGFBL}$ achieves an asymptotically optimal convergence rate of $\mathcal{O}\left(\frac{1}{\sqrt{nK}}\right)$, where $n$ is the number of clients and $K$ is the maximum number of global iterations. Extensive experimental results demonstrate the effectiveness of our approach in defending against model poisoning attacks, improving on baselines by $30\%$–$50\%$ in misclassification confidence.
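The outer-level defense described above, clipping each client update before aggregation, can be sketched as follows. This is a minimal illustration of standard L2 norm clipping followed by averaging, not the paper's exact algorithm; the function names and the threshold parameter `tau` are assumptions for illustration.

```python
import math

def l2_norm(v):
    """Euclidean norm of a flat update vector."""
    return math.sqrt(sum(x * x for x in v))

def clip_update(update, tau):
    """Scale an update down so its L2 norm is at most tau (norm clipping).

    A malicious update with a very large norm is shrunk to the threshold,
    bounding how much any single client can move the aggregate.
    """
    n = l2_norm(update)
    if n <= tau:
        return list(update)
    scale = tau / n
    return [x * scale for x in update]

def aggregate(updates, tau):
    """Server-side step (sketch): clip every client update, then average
    coordinate-wise. The averaged result has norm at most tau as well."""
    clipped = [clip_update(u, tau) for u in updates]
    dim = len(clipped[0])
    return [sum(u[i] for u in clipped) / len(clipped) for i in range(dim)]
```

For example, a poisoned update like `[100.0, -100.0]` is scaled down to norm `tau` before it enters the average, so its influence on the global model is bounded regardless of how large the attacker makes it.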