Abstract: Federated learning (FL) in Internet of Things (IoT) applications facilitates the collaborative training of a global model across distributed devices coordinated by a server. Despite its potential, the distributed nature and vulnerability of IoT devices render FL susceptible to Byzantine attacks. Existing approaches to countering these attacks are often impractical in real-world IoT scenarios, mainly due to the challenges posed by non-independent and identically distributed (non-IID) data and the high-dimensional models common on IoT devices. To address these challenges, we propose Guard-FL, an efficient and robust aggregation mechanism for FL assisted by uniform manifold approximation and projection (UMAP). Guard-FL is designed to enhance the performance of the global model in non-IID data environments without compromising defense capabilities. Specifically, it utilizes UMAP to capture nonlinear features among high-dimensional local models. Based on these features, robust regression and unsupervised clustering techniques are applied to effectively detect and remove attackers from the set of local model updates. Subsequently, the server employs information stored in weights (IIWs) to evaluate and aggregate the remaining divergent model updates, thus significantly improving the global model's performance. To validate the efficacy of Guard-FL, we provide a theoretical analysis of its convergence properties. Our experiments demonstrate that Guard-FL surpasses existing state-of-the-art solutions, achieving up to 96% accuracy in detecting malicious clients on non-IID CIFAR-10 datasets under various Byzantine attack scenarios. The implementation code is provided at https://github.com/XidianNSS/Guard-FL.git.
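The detection-and-aggregation idea in the abstract can be illustrated with a minimal sketch. Note the substitutions: PCA stands in for UMAP (to avoid a dependency on the umap-learn package), plain KMeans stands in for the paper's robust regression and clustering pipeline, and the function name, client counts, and the sign-flipping attack model are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def detect_byzantine(updates, n_components=2):
    """Project flattened local model updates into a low-dimensional space,
    cluster them into two groups, and flag the minority cluster as malicious.
    This assumes benign clients outnumber attackers (a standard FL assumption)."""
    reduced = PCA(n_components=n_components).fit_transform(updates)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)
    benign_label = int(np.argmax(np.bincount(labels)))  # majority cluster = benign
    benign = [i for i, l in enumerate(labels) if l == benign_label]
    malicious = [i for i, l in enumerate(labels) if l != benign_label]
    return benign, malicious

# Synthetic round: 8 benign clients send similar updates; 2 Byzantine
# clients send scaled sign-flipped updates (a classic attack).
rng = np.random.default_rng(0)
dim = 100
base = rng.normal(size=dim)
benign_updates = [base + 0.01 * rng.normal(size=dim) for _ in range(8)]
malicious_updates = [-10.0 * base + 0.01 * rng.normal(size=dim) for _ in range(2)]
updates = np.stack(benign_updates + malicious_updates)

benign, malicious = detect_byzantine(updates)
aggregated = updates[benign].mean(axis=0)  # aggregate only the trusted updates
```

In this toy round, the two sign-flipping clients (indices 8 and 9) fall into the minority cluster and are excluded before averaging; Guard-FL additionally weights the surviving updates via IIWs rather than using a plain mean.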
External IDs: dblp:journals/iotj/SongLCZSS24