A Dual-Defense Self-balancing Framework Against Bilateral Model Attacks in Federated Learning

Xiang Wu, Aiting Yao, Shantanu Pal, Frank Jiang, Xuejun Li, Jia Xu, Chengzu Dong, Xuefei Chen, Xiuyi Zhang, Xiao Liu

Published: 01 Jan 2025, Last Modified: 12 Nov 2025 · Crossref · CC BY-SA 4.0
Abstract: With the rapid expansion of Artificial Intelligence (AI) services, smart devices generate large amounts of user data at the edge of the network, which must be protected while information is effectively extracted from it. Federated learning (FL) is an important technology for handling dispersed data under strict privacy requirements in this context. However, the security threats posed by model inversion attacks and poisoning attacks undermine the mutual trust between clients and the server. Moreover, the existing defense mechanisms against these two attack types impose contradictory requirements on whether model parameters are publicly disclosed. In addition, the data distribution across clients is imbalanced, which increases model bias and reduces practicality. To address these issues, this study proposes a dual-defense self-balanced federated learning (DDSFL) framework, which introduces a novel lightweight defense mechanism at the model parameter aggregation stage, countering both types of attacks simultaneously by applying differential privacy and adjusting learning rates. The method also integrates a middleware-based reordering algorithm to enhance the robustness of the framework. Experimental results show that DDSFL effectively improves resistance to imbalanced data, forged data, and malicious behavior, significantly enhancing the generalization performance and security of the FL system.
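The abstract names two ingredients of the defense (differential-privacy noise at the aggregation stage and learning-rate adjustment) without giving the algorithm itself. For orientation only, below is a minimal, hypothetical Python sketch of how these two ingredients are commonly combined in FL; all function names, parameters, and the cosine-similarity heuristic for damping suspicious clients are illustrative assumptions, not the DDSFL algorithm described in the paper.

```python
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip each client's (flattened) update to bound sensitivity,
    average, then add Gaussian noise so the released global update
    satisfies differential privacy for a suitable noise_std."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]           # per-client L2 clipping
    avg = np.mean(clipped, axis=0)
    # Noise scale shrinks with the number of clients contributing.
    return avg + rng.normal(0.0, noise_std * clip_norm / len(clipped),
                            size=avg.shape)

def adjust_lr(base_lr, update, global_update, min_scale=0.1):
    """Shrink a client's effective learning rate when its update
    deviates strongly from the aggregate (a crude poisoning signal)."""
    cos = np.dot(update, global_update) / (
        np.linalg.norm(update) * np.linalg.norm(global_update) + 1e-12)
    return base_lr * max(min_scale, cos)  # negative alignment -> heavy damping
```

In this sketch the clipping-plus-noise step limits what a model inversion adversary can recover from the published aggregate, while the per-client learning-rate damping reduces the influence of updates that point away from the consensus direction; the paper's actual aggregation rule and reordering middleware may differ substantially.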