Abstract: Federated Learning (FL) is a distributed machine learning paradigm that has recently been applied to the Internet of Vehicles (IoV), forming FL-IoV networks. In such real-world scenarios, poisoning attacks, in which malicious clients send corrupted updates to the central server to degrade overall model performance, are a non-negligible threat. Many existing defenses against poisoning attacks fail in the IoV setting, where data is strictly restricted to on-board storage, and cannot detect hybrid attacks. To address these concerns, we present FLUK (protecting Federated Learning Utilizing Kullback-Leibler divergence), a framework that defends against poisoning attacks in the FL-IoV setting by detecting malicious clients. Our key insight is that existing attacks produce malicious local updates that deviate from benign ones, yielding a distinctly different distribution among the updates. This difference is reflected in the Kullback-Leibler (KL) divergence between client updates both within a single round and across rounds. Specifically, FLUK combines a 2D KL divergence detection method with a cumulative reputation module to identify malicious clients. Experiments on FL-IoV tasks show that our method achieves detection accuracies of up to 98% under different single attacks and 96% under hybrid attacks. Our implementation of FLUK on autonomous delivery vehicles demonstrates its effectiveness in real-world scenarios.
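
To make the within-round half of the idea concrete, the following is a minimal sketch, not the authors' implementation: it scores each client's update by its mean symmetric KL divergence to the other clients' updates in the same round, approximating each update's parameter distribution with a smoothed histogram. The names (`kl_scores`, `round_updates`), the bin count, and the toy sign-flip attack are all illustrative assumptions.

```python
import numpy as np

def hist_distribution(update, bins, value_range):
    """Histogram of a flattened update vector, smoothed to avoid zero bins."""
    hist, _ = np.histogram(update, bins=bins, range=value_range)
    probs = hist.astype(float) + 1e-8  # additive smoothing so KL is finite
    return probs / probs.sum()

def kl(p, q):
    """KL divergence D_KL(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def kl_scores(round_updates, bins=50):
    """Score each client by its mean symmetric KL divergence to all others.

    Benign updates trained toward similar objectives should cluster, so a
    client whose update distribution diverges from the rest scores high.
    """
    lo = min(u.min() for u in round_updates)
    hi = max(u.max() for u in round_updates)
    dists = [hist_distribution(u, bins, (lo, hi)) for u in round_updates]
    n = len(dists)
    scores = np.zeros(n)
    for i in range(n):
        pairwise = [0.5 * (kl(dists[i], dists[j]) + kl(dists[j], dists[i]))
                    for j in range(n) if j != i]
        scores[i] = np.mean(pairwise)
    return scores

# Toy example: nine benign clients plus one sign-flipped, rescaled update.
rng = np.random.default_rng(0)
updates = [rng.normal(0.0, 0.1, size=10_000) for _ in range(9)]
updates.append(-5.0 * updates[0] + rng.normal(0.0, 0.5, size=10_000))
print(kl_scores(updates))  # the last (poisoned) client should stand out
```

In a full system along the lines the abstract describes, these per-round scores would form one axis of the 2D detection, be computed again across rounds for the second axis, and feed a cumulative reputation score rather than trigger a one-shot decision.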