Evolutionary Multi-model Federated Learning on Malicious and Heterogeneous Data

Published: 2023 · Last Modified: 13 Nov 2024 · ICDM (Workshops) 2023 · CC BY-SA 4.0
Abstract: Federated learning systems are vulnerable to data poisoning attacks from malicious clients, whose data are often heterogeneous. Poisoning such heterogeneous data further increases the diversity of the local datasets, and traditional single-model federated learning struggles to capture this diverse knowledge effectively. To address this challenge, we propose an evolutionary multi-model federated learning framework called EMFL, which trains multiple diverse models to learn from the diverse data effectively. Specifically, we introduce a gradient-based particle swarm optimization approach that incorporates gradient information from the models and uses a multi-elite strategy to achieve fast convergence while promoting model diversity. Furthermore, we propose a client-matching strategy based on the cosine similarity of multi-model updates, which analyzes the similarity of distribution patterns across the diverse data and matches clients with concentrated distributions; this prevents too many models from converging to malicious data and causing extreme model diversity. EMFL employs multiple lightweight models instead of a single large-architecture model, easing training for clients with limited computing power. Experimental results on nine datasets show that EMFL consistently outperforms its counterparts.
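The two mechanisms sketched in the abstract — a particle swarm update augmented with gradient information and a multi-elite pool, and cosine-similarity matching of client updates to models — might look roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, coefficients (`w`, `c1`, `c2`, `c3`), and the random-elite sampling rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gbpso_step(position, velocity, gradient, personal_best, elites,
               w=0.7, c1=1.5, c2=1.5, c3=0.1):
    """One velocity/position update for a single particle (model).

    Hypothetical gradient-based PSO variant: the usual inertia,
    cognitive, and social terms are combined with a gradient term,
    and the social attractor is sampled from a pool of elites to
    promote diversity among models.
    """
    elite = elites[rng.integers(len(elites))]  # multi-elite: random elite as social target
    r1 = rng.random(position.shape)
    r2 = rng.random(position.shape)
    velocity = (w * velocity
                + c1 * r1 * (personal_best - position)
                + c2 * r2 * (elite - position)
                - c3 * gradient)               # gradient information steers the swarm
    return position + velocity, velocity

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_clients(client_updates, model_updates):
    """Assign each client to the model whose latest update direction
    is most cosine-similar to the client's own update (illustrative
    reading of the paper's client-matching strategy)."""
    assignment = {}
    for cid, cu in client_updates.items():
        sims = [cosine_similarity(cu, mu) for mu in model_updates]
        assignment[cid] = int(np.argmax(sims))
    return assignment
```

Matching by update direction rather than raw loss means clients whose local distributions point the same way in parameter space are grouped onto the same model, which is the intuition behind concentrating similar distributions while keeping the models apart.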