Trusted Aggregation (TAG): Model Filtering Backdoor Defense In Federated Learning

Published: 21 Oct 2022, Last Modified: 05 May 2023 · FL-NeurIPS 2022 Poster
Keywords: federated learning, backdoor attack, robust aggregation
TL;DR: Our proposed method, Trusted Aggregation, is a novel approach to preventing backdoor attacks within the Federated Learning framework.
Abstract: Federated Learning is a framework for training machine learning models from multiple local datasets without direct access to the data. A shared model is jointly learned through an interactive process between the server and clients that combines locally learned model gradients or weights. However, the lack of data transparency naturally raises concerns about model security. Recently, several state-of-the-art backdoor attacks have been proposed that achieve high attack success rates while remaining difficult to detect, leading to compromised federated learning models. In this paper, motivated by differences in the output-layer distribution between models trained with and without the presence of backdoor attacks, we propose a defense method that prevents backdoor attacks from influencing the shared model while maintaining the accuracy of the original classification task.
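The abstract describes filtering client updates whose output layers deviate from those of benign models. A minimal sketch of this general idea is shown below; the cosine-similarity criterion, the `threshold` value, and all function names are illustrative assumptions, not the paper's actual Trusted Aggregation rule.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two flat weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-12)

def filter_clients(client_output_layers, trusted_output_layer, threshold=0.5):
    """Illustrative sketch (assumption, not the paper's method): keep only
    the indices of client updates whose flattened output-layer weights stay
    close, by cosine similarity, to a trusted reference model's output layer.
    """
    return [
        i for i, weights in enumerate(client_output_layers)
        if cosine_similarity(weights, trusted_output_layer) >= threshold
    ]

# Hypothetical usage: a benign client tracks the trusted output layer,
# while a backdoored client's output layer points in a different direction.
trusted = [1.0] * 6
benign = [1.01] * 6       # small deviation, high similarity
malicious = [-1.0] * 6    # inverted weights, negative similarity
kept = filter_clients([benign, malicious], trusted)
```

Only updates passing the filter would then be aggregated by the server, so a backdoored update never reaches the shared model.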
Is Student: Yes