When Large Models Meet Generalized Linear Models: Hierarchy Statistical Network for Secure Federated Learning
Keywords: Federated Learning, Large Pre-trained Models, Generalized Linear Models, Security in Federated Learning, Poisoning Attacks, Deviance Residuals
TL;DR: We propose HStat-Net to refine the feature representation space, enabling the integration of GLMs with large pre-trained models in FL. Building on this, we design FedRACE, which detects poisoning attacks using GLMs' deviance residuals.
Abstract: Large pre-trained models perform well on many Federated Learning (FL) tasks. Recent studies have revealed that fine-tuning only the final layer of a large pre-trained model can reduce computational and communication costs while maintaining high performance. Because the final layer typically performs a linear transformation, it can be modeled as a Generalized Linear Model (GLM). GLMs offer advantages in statistical modeling, especially for anomaly detection. Leveraging these advantages, GLM-based methods can enhance the security of the fine-tuning process for large pre-trained models. However, integrating GLMs with large pre-trained models in FL presents challenges: GLMs rely on linear decision boundaries and struggle with the complex feature representation spaces produced by pre-trained models. To address this, we introduce the Hierarchy Statistical Network (HStat-Net), which refines these spaces to make them more discriminative, allowing GLMs to work effectively in FL. Building on HStat-Net, we further develop FedRACE to detect poisoning attacks using deviance residuals from GLMs, and we provide a theorem to support FedRACE's detection. Extensive experiments conducted on CIFAR-100, Food-101, and Tiny ImageNet demonstrate that FedRACE significantly outperforms existing state-of-the-art defense algorithms.
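As background for the detection idea in the abstract, the following is a minimal sketch of how deviance residuals from a Bernoulli GLM can flag anomalous samples. This is an illustrative assumption, not the paper's FedRACE implementation; the function names and the threshold value are hypothetical.

```python
import math

def deviance_residual(y, p, eps=1e-12):
    """Bernoulli deviance residual: sign(y - p) * sqrt(2 * unit deviance).

    y is the observed binary label, p the model's predicted probability.
    Well-fit samples yield small residuals; poisoned or mislabeled
    samples tend to produce large-magnitude residuals.
    """
    p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
    unit_dev = -2.0 * (y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return math.copysign(math.sqrt(unit_dev), y - p)

def flag_anomalies(labels, probs, threshold=2.0):
    """Flag samples whose residual magnitude exceeds a (hypothetical) cutoff."""
    residuals = [deviance_residual(y, p) for y, p in zip(labels, probs)]
    return [abs(r) > threshold for r in residuals]
```

For example, a confident correct prediction (y = 1, p = 0.9) gives a small residual, while a confident wrong prediction (y = 1, p = 0.01) gives a large one and is flagged.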
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5051