DFL$^2$G: Dynamic Agnostic Federated Learning with Learngene

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Federated Learning, Low-cost Communication, Learngene
TL;DR: We propose DFL$^2$G, a "Collaborating & Condensing & Initializing" dynamic federated learning framework inspired by Learngene, aiming to achieve low-cost communication, robust privacy protection and effective initialization of agnostic models.
Abstract: Dynamic agnostic federated learning is a promising research field in which agnostic clients can join the federated system at any time to collaboratively construct machine learning models. The critical challenges are to securely and effectively initialize the models for these agnostic clients and to limit the communication overhead with the server when they participate in training. Recent research typically uses the optimized global model for initialization, which can leak private information about the training data. To overcome these challenges, we draw inspiration from the recently proposed Learngene paradigm, which compresses a large-scale ancestral model into meta-information pieces that can initialize various descendant task models, and propose a \textbf{D}ynamic agnostic \textbf{F}ederated \textbf{L}earning with \textbf{L}earn\textbf{G}ene framework. The local model achieves smooth updates based on the Fisher information matrix and accumulates general inheritable knowledge through collaborative training. We employ sensitivity analysis of task model gradients to locate the meta-information (referred to as the \textit{learngene}) within the model, ensuring robustness across various tasks. Subsequently, these well-trained \textit{learngenes} are inherited by various agnostic clients for model initialization and for interaction with the server. Comprehensive experiments demonstrate the effectiveness of the proposed approach in achieving low-cost communication, robust privacy protection, and effective initialization of models for agnostic clients.
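The abstract names two mechanisms: smooth local updates regularized by the Fisher information matrix, and gradient-sensitivity analysis to locate the learngene within the model. The sketch below is a minimal numpy illustration of how such mechanisms are commonly realized (an EWC-style diagonal-Fisher penalty and a mean-absolute-gradient sensitivity score); it is not the paper's actual algorithm, and all function names are hypothetical.

```python
import numpy as np

def diag_fisher(sample_grads):
    # Diagonal Fisher approximation: expected squared per-parameter
    # gradient over a batch of per-sample gradients (rows).
    return np.mean(np.square(sample_grads), axis=0)

def smooth_update(w, grad, w_prev, fisher, lr=0.1, lam=1.0):
    # EWC-style regularized step: the Fisher diagonal penalizes drift
    # on parameters important to previously accumulated knowledge,
    # yielding a "smooth" local update.
    return w - lr * (grad + lam * fisher * (w - w_prev))

def select_learngene(layer_grads, k=2):
    # Score each layer's task sensitivity by its mean absolute gradient
    # and keep the k least task-sensitive layers as the candidate
    # learngene (the general, inheritable meta-information).
    scores = {name: float(np.mean(np.abs(g))) for name, g in layer_grads.items()}
    return sorted(scores, key=scores.get)[:k]
```

For example, a layer whose gradients stay near zero across tasks scores as task-insensitive and would be selected into the learngene, while a highly task-specific head layer would be excluded and kept local.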
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9662