Personalized Federated Adaptation via Prototype–Text Contrastive Alignment

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Personalized federated learning, vision–language models, few-shot learning
Abstract: Personalized federated learning (PFL) aims to share global knowledge while tailoring models to heterogeneous clients. However, traditional PFL methods face two key challenges: (i) reliance on aggregation-based model updates to form shared information makes training sensitive to data heterogeneity; and (ii) repeated on-client training and transmission of full model parameters or gradients yield substantial computational and communication overhead. Inspired by vision–language models (VLMs) such as CLIP, we pursue an alternative paradigm that adapts a frozen backbone through lightweight modules that leverage language-anchored priors. Specifically, we propose Personalized Federated Adaptation via Prototype–Text Contrastive Alignment (FedPACT), which treats a client-specific personalized prototype cache and a shared text head as the only trainable and communicated components. Clients update only their prototypes to fit local distributions, while the server refines the shared text head by contrastively aligning the text embeddings with personalized prototypes. Our theoretical analysis shows that the shared text head improves convergence of the personalized prototype cache by enlarging the prototype–text margin. Experiments demonstrate that FedPACT achieves superior personalized and global performance over state-of-the-art methods.
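The mechanism the abstract describes — clients caching per-class prototypes from a frozen backbone, and a shared text head being refined by contrastive alignment against those prototypes — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the linear text-head parameterization `W`, the temperature `tau`, and the simplified gradient (which ignores the Jacobian of the L2 normalization) are all assumptions for exposition.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def client_prototypes(features, labels, num_classes):
    # Client side: per-class means of frozen-backbone image features form
    # the personalized prototype cache (the only client-trainable state).
    protos = np.stack([features[labels == c].mean(axis=0)
                       for c in range(num_classes)])
    return l2_normalize(protos)

def text_head_step(text_emb, W, protos, lr=0.01, tau=0.5):
    # Server side: one gradient step contrastively aligning projected text
    # embeddings with personalized prototypes (InfoNCE-style, diagonal
    # targets: class c's text embedding should match class c's prototype).
    # W is the shared, trainable text-head projection; everything else is frozen.
    z = l2_normalize(text_emb @ W)                      # one row per class
    logits = z @ protos.T / tau                         # text-prototype similarity
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # Approximate cross-entropy gradient w.r.t. z, back-propagated to W
    # (the normalization Jacobian is dropped for brevity).
    grad_z = (p - np.eye(len(protos))) @ protos / tau
    return W - lr * text_emb.T @ grad_z
```

In this sketch a training round would aggregate or broadcast only `protos` and `W`, which is the communication-efficiency argument the abstract makes: the backbone itself is never transmitted.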
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 11651