Keywords: Federated Learning, Data Poisoning, Adversarial Attack
TL;DR: This paper proposes a novel defense mechanism for Federated Learning systems that mitigates data poisoning attacks in autonomous vehicles by combining anomaly detection with robust aggregation techniques.
Abstract: Federated Learning (FL) has become an established technique for privacy-preserving collaborative training across a multitude of clients. Its ability to enable collaborative learning among multiple parties holding extensive volumes of data, while preserving data privacy, has made it an attractive solution to numerous challenges in sensitive data-driven fields such as autonomous vehicles (AVs). However, its decentralized nature exposes it to security threats, such as evasion and data poisoning attacks, in which malicious participants can compromise the training data. This paper addresses the challenge of defending federated learning systems against data poisoning attacks, specifically data-flipping techniques, in AVs by proposing a novel defense mechanism that combines anomaly detection with robust aggregation techniques. Our approach employs statistical outlier detection and model-based consistency checks to filter out compromised updates before they affect the global model. Experiments on benchmark datasets show that our method significantly enhances robustness, preventing a nearly 15\% accuracy drop in the global model when confronted with a malicious participant and reducing the attack success rate even at a 20\% poisoning level. These findings provide a comprehensive solution for strengthening FL systems against adversarial threats.
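The abstract does not spell out the concrete filtering rule; as one illustration of the general idea (statistical outlier detection followed by robust aggregation of the surviving client updates), a minimal sketch might look as follows. The MAD-based z-score, the threshold of 3.0, the coordinate-wise median center, and the `filter_and_aggregate` helper are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def filter_and_aggregate(updates, mad_threshold=3.0):
    """Filter suspicious client updates, then robustly aggregate the rest.

    updates: list of 1-D numpy arrays (flattened model updates), one per client.
    mad_threshold: robust z-score cutoff (assumed value, not from the paper).
    """
    U = np.stack(updates)                       # shape: (n_clients, n_params)
    center = np.median(U, axis=0)               # coordinate-wise median as a robust center
    dists = np.linalg.norm(U - center, axis=1)  # each client's distance from the center

    # Robust z-scores via the median absolute deviation (MAD);
    # 1.4826 rescales MAD to match a standard deviation under normality.
    mad = np.median(np.abs(dists - np.median(dists)))
    scores = np.abs(dists - np.median(dists)) / (1.4826 * mad + 1e-12)

    kept = U[scores < mad_threshold]            # drop statistical outliers
    if len(kept) == 0:                          # fall back to the median if all are flagged
        return center
    return kept.mean(axis=0)                    # average the surviving updates

# Example: 9 benign clients plus 1 poisoned update with a large deviation.
rng = np.random.default_rng(0)
benign = [rng.normal(0, 0.01, size=100) for _ in range(9)]
poisoned = [rng.normal(5, 0.01, size=100)]
agg = filter_and_aggregate(benign + poisoned)
print(np.abs(agg).max())  # near 0: the poisoned update was filtered out
```

In this sketch the model-based consistency checks mentioned in the abstract are not modeled; only the statistical-outlier stage is shown.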
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8524