pFedSAM: Secure Federated Learning Against Backdoor Attacks via Personalized Sharpness-Aware Minimization

Published: 22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: personalized federated learning, backdoor attack, model security
Abstract: Federated learning is a distributed learning paradigm that allows clients to collaboratively train a model without sharing their local data. Despite these benefits, federated learning is vulnerable to backdoor attacks, in which malicious clients inject backdoors into the global model aggregation process so that the resulting model misclassifies samples containing backdoor triggers while performing normally on benign samples. Existing defenses against backdoor attacks are either effective only under very specific attack models or severely degrade the model's performance on benign samples. To address these deficiencies, this paper proposes pFedSAM, a new federated learning method based on partial model personalization and sharpness-aware training. Theoretically, we analyze the convergence properties of pFedSAM in the general non-convex and heterogeneous-data setting. Empirically, we conduct extensive experiments on a suite of federated datasets and show the superiority of pFedSAM over state-of-the-art robust baselines in terms of both robustness and accuracy.
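The abstract names sharpness-aware training as one of pFedSAM's two building blocks. As background, the core sharpness-aware minimization (SAM) step first perturbs the weights toward the locally worst-case (sharpest) direction and then descends using the gradient taken at that perturbed point. The sketch below is a minimal, generic illustration of that two-step update using NumPy; the function name `sam_step` and the hyperparameter values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One generic sharpness-aware minimization (SAM) update (illustrative sketch).

    1. Ascend: perturb the weights by rho along the normalized gradient,
       approximating the worst-case point in a rho-ball around w.
    2. Descend: apply the gradient computed at the perturbed weights.
    """
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    g_sharp = grad_fn(w + eps)                    # gradient at the perturbed point
    return w - lr * g_sharp

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda w: 2.0 * w)
```

In pFedSAM-style personalization, an update of this kind would be applied to the shared part of each client's model, which is the portion aggregated by the server, while personalized layers are trained locally; the split itself is the "partial model personalization" component described in the abstract.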
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4769