A robust federated learning client selection with combinatorial data class representations and data augmentation

26 Sept 2024 (modified: 21 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Federated Learning, Client Selection, Backdoor Defense, Data Augmentation, Representation Learning.
TL;DR: A robust federated learning client selection with combinatorial data class representations and data augmentation
Abstract: Federated learning (FL) client selection schemes can effectively mitigate the global model performance degradation caused by randomly aggregating clients with heterogeneous data. At the same time, research has exposed FL's susceptibility to backdoor attacks. Herein lies the dilemma: traditional client selection methods and backdoor defenses stand at odds, making their integration an elusive goal. To resolve this, we introduce Grace, a resilient client selection framework that blends combinatorial class sampling with data augmentation. On the client side, Grace first applies a local model purification method, fortifying the model's defenses by bolstering its innate robustness. Afterward, local class representations are extracted for server-side client selection. This approach not only shields benign models from backdoor tampering but also allows the server to glean insights into local class representations without infringing upon clients' privacy. On the server side, Grace introduces a novel representation combination sampling method: clients are selected based on the interplay of their class representations, a strategy that simultaneously weeds out malicious actors and draws in clients whose data holds unique value. Our extensive experiments highlight Grace's capabilities. The results are compelling: Grace improves defense performance by over 50% compared to state-of-the-art (SOTA) backdoor defenses and, in the best case, improves accuracy by 3.19% compared to SOTA client selection schemes. Consequently, Grace achieves substantial advancements in both security and accuracy.
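The abstract does not spell out the selection algorithm itself, so the following is only a minimal Python sketch of the two server-side ideas it describes: filtering out anomalous (potentially backdoored) clients, then combinatorially sampling clients whose class representations complement one another. Everything here is an assumption for illustration — the function name select_clients, the per-client (num_classes, dim) representation layout, the median-distance z-score filter, and the greedy least-cosine-redundancy heuristic are stand-ins, not Grace's actual method.

```python
import numpy as np

def select_clients(class_reps, num_select, outlier_z=2.0):
    """Hypothetical sketch of representation-based client selection.

    class_reps: dict mapping client id -> (num_classes, dim) array of
                per-class representation vectors.
    Returns a list of selected client ids.
    """
    ids = list(class_reps.keys())
    # Flatten each client's class representations into one vector.
    flat = np.stack([class_reps[c].ravel() for c in ids])

    # Step 1 (stand-in for weeding out malicious actors): drop clients
    # whose representation lies far from the coordinate-wise median.
    median = np.median(flat, axis=0)
    dists = np.linalg.norm(flat - median, axis=1)
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    candidates = [i for i, zi in enumerate(z) if zi < outlier_z]

    # Step 2 (stand-in for combinatorial sampling): greedily pick the
    # candidate least redundant (by max cosine similarity) with those
    # already selected, favoring clients with unique class data.
    selected = []
    while candidates and len(selected) < num_select:
        if not selected:
            best = min(candidates, key=lambda i: dists[i])  # start near the median
        else:
            chosen = flat[selected]
            def redundancy(i):
                v = flat[i]
                sims = chosen @ v / (
                    np.linalg.norm(chosen, axis=1) * np.linalg.norm(v) + 1e-12
                )
                return sims.max()
            best = min(candidates, key=redundancy)
        selected.append(best)
        candidates.remove(best)
    return [ids[i] for i in selected]

# Toy usage: 8 clients, 10 classes, 16-dimensional class representations;
# client7 is shifted to mimic an anomalous (e.g. poisoned) update.
rng = np.random.default_rng(0)
reps = {f"client{i}": rng.normal(size=(10, 16)) for i in range(8)}
reps["client7"] += 5.0
print(select_clients(reps, num_select=4))
```

On this toy data the shifted client exceeds the z-score threshold and is excluded before sampling; the greedy step then favors mutually dissimilar representations, mirroring the abstract's goal of drawing in clients whose data holds unique value.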
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6457