Keywords: Personalized Federated Learning, Parameter-Efficient Fine-Tuning, Low-Rank Adaptation, Singular Value Decomposition, LLM
TL;DR: This paper proposes FedKLS, a KL-guided spectral adaptation framework for personalized federated learning that leverages low-rank SVD-based adapters to improve performance and reduce communication costs in extremely non-IID settings.
Abstract: Federated learning faces two key challenges: handling non-IID client distributions and reducing the communication cost of adapting large models. To address these issues, we propose FedKLS, a framework that combines KL-divergence-based personalization with low-rank SVD-based adapters. FedKLS dynamically selects spectral components by mapping client-distribution heterogeneity onto the singular value spectrum, then builds specialized LoRA-style adapters, enabling both scalable aggregation and client-specific personalization. Extensive experiments on 20NewsGroup and Banking77 with DistilBERT and Qwen backbones show that FedKLS achieves competitive performance compared to state-of-the-art parameter-efficient fine-tuning baselines, including LoRA, PiSSA, MiLoRA, and full fine-tuning. In highly non-IID settings ($\alpha = 0.01$), FedKLS improves F1-score by up to 11--12\% and reduces total communication cost by about 3x over PiSSA, achieving the best trade-off between personalization and scalability. These results demonstrate the effectiveness of KL-guided spectral adaptation in federated fine-tuning of large models. Our implementation is available at: https://anonymous.4open.science/r/FedKLS.
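To make the abstract's idea of KL-guided spectral adaptation concrete, here is a minimal sketch of one way a client's label-distribution divergence from the global distribution could control how many singular components of a pretrained weight are carved out into a LoRA-style adapter. The specific mapping (tanh squashing into a rank range), the parameter names (`r_min`, `r_max`, `kl_scale`), and the choice of keeping the top components are illustrative assumptions, not the paper's actual procedure.

```python
import torch


def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between a client's label distribution p and the global distribution q."""
    p, q = p.clamp_min(eps), q.clamp_min(eps)
    return torch.sum(p * torch.log(p / q))


def build_kl_guided_adapter(W, client_dist, global_dist, r_min=4, r_max=64, kl_scale=1.0):
    """Build a LoRA-style (A, B) adapter from the singular components of W.

    The retained rank r grows with the client's KL divergence from the global
    label distribution (hypothetical mapping: more heterogeneity -> larger rank).
    """
    kl = kl_divergence(client_dist, global_dist)
    frac = torch.tanh(kl / kl_scale).item()          # squash KL into [0, 1)
    r = int(r_min + frac * (r_max - r_min))          # adapter rank for this client

    # SVD of the pretrained weight; split the selected spectral components
    # into trainable low-rank factors, as in PiSSA-style initialization.
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * S[:r].sqrt()                      # (out_dim, r)
    B = S[:r].sqrt().unsqueeze(1) * Vh[:r, :]        # (r, in_dim)

    # Frozen residual weight; A @ B is the client-specific trainable adapter.
    W_res = W - A @ B
    return W_res, A, B
```

Under this sketch, clients with more skewed local distributions receive larger adapters, while only the low-rank factors need to be communicated, which is consistent with the abstract's claimed communication savings over transmitting full fine-tuned weights.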
Primary Area: foundation or frontier models, including LLMs
Submission Number: 18316