Successive Interference Cancellation Based Defense for Trigger Backdoor in Federated Learning

Published: 01 Jan 2023 · Last Modified: 13 Nov 2024 · ICC 2023 · CC BY-SA 4.0
Abstract: Federated Learning (FL) provides a decentralized training mechanism that preserves users' data privacy. However, FL is vulnerable to backdoor attacks, a type of data poisoning attack in which adversaries tamper with local models by injecting a trigger into a subset of the training data. After the aggregation step, the poisoned global model mispredicts any input image stamped with the adversary-designed trigger. Unlike existing defense methods, which attempt to identify and remove abnormal model updates at the aggregation step, this paper proposes a Successive Interference Cancellation-based Defense Framework (SICDF) that detects and eliminates the trigger during model inference. SICDF first employs Explainable AI to infer where the trigger is located and then applies image processing techniques to eliminate the potential trigger's effect. Experimental results show that SICDF effectively recovers poisoned data while only slightly reducing accuracy on clean models and benign data.
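The abstract gives only a high-level description of SICDF's two-stage inference pipeline. The sketch below is one plausible instantiation, not the paper's implementation: it uses plain input-gradient saliency as a stand-in for the Explainable AI localizer and mean-filling of the suspected patch as a stand-in for the image processing step. The function names (localize_trigger, suppress_trigger) and the fixed patch size are hypothetical.

```python
# Hypothetical sketch of an inference-time trigger defense in PyTorch.
# Stage 1: locate the most salient patch (assumed to contain the trigger).
# Stage 2: neutralize that patch before the final prediction.
import torch
import torch.nn.functional as F

def localize_trigger(model, image, patch=8):
    """Return the (top, left) corner of the most salient patch-sized region."""
    x = image.clone().unsqueeze(0).requires_grad_(True)  # (1, C, H, W)
    logits = model(x)
    # Gradient of the predicted class score w.r.t. the input pixels.
    logits[0, logits.argmax()].backward()
    saliency = x.grad.abs().sum(dim=1, keepdim=True)     # (1, 1, H, W)
    # Average saliency over sliding patch windows and pick the hottest one.
    heat = F.avg_pool2d(saliency, patch, stride=1)
    idx = heat.flatten().argmax()
    w = heat.shape[-1]
    return (idx // w).item(), (idx % w).item()

def suppress_trigger(image, top, left, patch=8):
    """Overwrite the suspected trigger region with the image's mean value."""
    cleaned = image.clone()
    cleaned[:, top:top + patch, left:left + patch] = image.mean()
    return cleaned

# Usage: run both stages before the usual forward pass.
# top, left = localize_trigger(model, img)
# pred = model(suppress_trigger(img, top, left).unsqueeze(0)).argmax(dim=1)
```

Note the design consequence the abstract hints at: because this defense operates on inputs at inference time, it requires no change to the FL aggregation protocol and can be applied to an already-deployed global model.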