Personalized Federated Fine-tuning for Heterogeneous Data: a Two-Level Low Rank Adaptation Approach

ICLR 2025 Conference Submission 12715 Authors

28 Sept 2024 (modified: 22 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Federated Learning, Low Rank Adaptation, Heterogeneous Data, Language Model, Foundation Model
TL;DR: This paper introduces a two-level adaptation approach for federated foundation model fine-tuning with heterogeneous data.
Abstract: We study the personalized federated fine-tuning task with heterogeneous client data in the context of foundation models, where clients collaboratively fine-tune a foundation model (e.g., BERT, GPT) without sharing their local data, while simultaneously obtaining personalized models. While recent efforts have applied parameter-efficient fine-tuning techniques such as low-rank adaptation (LoRA) or prompt tuning in federated settings, they often overlook data heterogeneity and model personalization. The primary challenge is that a single common adapter or prompt learner may not suffice for the diverse data across all clients. To address this issue, we propose PF2LoRA, a new personalized federated fine-tuning algorithm based on a novel \emph{two-level low rank adaptation framework} built on top of LoRA. Given a pretrained foundation model whose weights are frozen, our algorithm learns two levels of adaptation simultaneously: the first level learns a common adapter for all clients, while the second level fosters individual client personalization. This framework explicitly accommodates variations in adapter matrix ranks across clients and introduces minimal additional memory overhead, since the second-level adaptation comprises a small number of parameters compared to the first level. Our experiments on natural language understanding and generation tasks demonstrate that PF2LoRA significantly outperforms existing federated fine-tuning methods.
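The abstract describes a two-level adaptation scheme: a common LoRA adapter shared across clients plus a small client-specific adapter, with the foundation-model weights frozen. Below is a minimal, hedged PyTorch sketch of this general idea; the class name `TwoLevelLoRALinear` and the parameters `shared_rank` and `client_rank` are illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
import torch
import torch.nn as nn

class TwoLevelLoRALinear(nn.Module):
    """Illustrative sketch: a frozen linear layer augmented with a shared
    (common) low-rank adapter and a small client-specific adapter."""

    def __init__(self, base: nn.Linear, shared_rank: int = 8, client_rank: int = 2):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # foundation-model weights stay frozen

        in_f, out_f = base.in_features, base.out_features
        # First level: common adapter, intended to be aggregated across clients.
        self.shared_A = nn.Parameter(torch.randn(shared_rank, in_f) * 0.01)
        self.shared_B = nn.Parameter(torch.zeros(out_f, shared_rank))
        # Second level: lightweight personalization adapter, kept local to each client.
        self.client_A = nn.Parameter(torch.randn(client_rank, in_f) * 0.01)
        self.client_B = nn.Parameter(torch.zeros(out_f, client_rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shared_delta = x @ self.shared_A.T @ self.shared_B.T  # common low-rank update
        client_delta = x @ self.client_A.T @ self.client_B.T  # personal low-rank update
        return self.base(x) + shared_delta + client_delta
```

In such a setup, only the shared adapter parameters would be communicated to the server for aggregation, while the client-specific adapter remains local; because its rank is much smaller, the added memory and communication overhead is minimal, consistent with the claim in the abstract.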
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12715