Collaborative and Efficient Personalization with Mixtures of Adaptors

Published: 11 Feb 2025 · Last Modified: 06 Mar 2025 · CPAL 2025 (Proceedings Track) Poster · License: CC BY 4.0
Keywords: federated learning, personalization, multi-task learning, clustering, parameter-efficient
TL;DR: We propose a collaborative and memory-efficient method for personalized federated learning that personalizes groups of similar clients by learning a mixture of low-rank adaptors on top of the base model.
Abstract: Heterogeneous data are prevalent in real-world federated learning. We propose a parameter-efficient framework, Federated Low-Rank Adaptive Learning (FLoRAL), that allows clients to personalize in groups by mixing between low-rank adaptors, where the mixtures are client-specific. FLoRAL is a model parameterization that casts personalized federated learning as a multi-task learning problem, with weight sharing as an implicit regularizer. It is memory-efficient, as the personalized parameters (i.e., base model + adaptors) are all federated. Our results show that FLoRAL can generalize better than a mixture of full models when data are scarce. It can also consistently personalize better than models with a locally tuned adaptor per client. This demonstrates the benefits of "federated personalization" and its robustness against overfitting. We derive the convergence rates and show theoretically that FLoRAL can lead to better variance reduction of the base model's gradients.
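To make the parameterization concrete, below is a minimal PyTorch sketch of a FLoRAL-style linear layer, not the authors' implementation: the base weight and all low-rank adaptor factors are shared (federated), while each client holds only its own mixture logits over the adaptors. Names such as `FLoRALLinear`, `num_adaptors`, and `rank` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FLoRALLinear(nn.Module):
    """Linear layer with a client-specific mixture of low-rank adaptors.

    Illustrative sketch: the base weight and the factors A, B are
    federated (shared across clients); only `mixture_logits` would be
    kept per client.
    """

    def __init__(self, in_features, out_features, num_adaptors=4, rank=8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        # Low-rank factors: adaptor k contributes B[k] @ A[k] to the weight.
        self.A = nn.Parameter(torch.randn(num_adaptors, rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_adaptors, out_features, rank))
        # Client-specific mixture logits over the adaptors.
        self.mixture_logits = nn.Parameter(torch.zeros(num_adaptors))

    def forward(self, x):
        pi = F.softmax(self.mixture_logits, dim=0)  # mixture weights
        # Effective low-rank update: sum_k pi_k * B_k A_k.
        delta = torch.einsum("k,kor,kri->oi", pi, self.B, self.A)
        return self.base(x) + F.linear(x, delta)

# Usage: a forward pass through one mixed-adaptor layer.
layer = FLoRALLinear(128, 64)
y = layer(torch.randn(32, 128))  # shape: (32, 64)
```

Under these assumptions, only the mixture logits (a handful of scalars per layer) are personalized, which is consistent with the memory efficiency claimed above relative to keeping a full model per cluster.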
Submission Number: 27
