TL;DR: FedPHA enables efficient federated personalization through a fixed-length global prompt and flexible local prompts.
Abstract: Federated Prompt Learning (FPL) adapts pre-trained Vision-Language Models (VLMs) to federated learning through prompt tuning, leveraging their transferable representations and strong generalization capabilities. Traditional methods typically require a uniform prompt length for federated aggregation, which limits their ability to accommodate clients with diverse prompt-length requirements and biased data distributions. In this paper, we propose **Fed**erated **P**rompt Learning for **H**eterogeneous Client **A**daptation (FedPHA), a novel framework that combines a fixed-length global prompt for efficient aggregation with local prompts of varying lengths to capture client-specific data characteristics. In addition, FedPHA employs a Singular Value Decomposition (SVD)-based projection and bidirectional alignment to disentangle conflicts in the global knowledge arising from client heterogeneity, ensuring that personalized client tasks utilize only non-harmful global knowledge. As a result, global knowledge improves model generalization while local knowledge preserves client-specific optimization. Experimental results validate the effectiveness of FedPHA in balancing global and personalized knowledge in federated learning scenarios.
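The sketch below illustrates the two mechanisms the abstract describes: aggregating only the fixed-length global prompt (so local prompts may have arbitrary lengths) and an SVD-based projection of the aggregated global prompt onto a client's local prompt subspace. This is a minimal PyTorch sketch, not the released implementation (see the repository link below); the prompt shapes, the rank `k`, the exact projection rule, and the `alignment_loss` form are illustrative assumptions.

```python
import torch

GLOBAL_LEN, DIM = 4, 512  # fixed global prompt length and embedding dim (illustrative values)


def init_client_prompts(local_len: int):
    """Per-client parameters: a global prompt with a shared shape and a local
    prompt whose length may differ across clients."""
    global_prompt = torch.randn(GLOBAL_LEN, DIM, requires_grad=True)
    local_prompt = torch.randn(local_len, DIM, requires_grad=True)
    return global_prompt, local_prompt


def server_aggregate(client_global_prompts, weights):
    """FedAvg-style aggregation is possible because every client's global prompt
    has the same (GLOBAL_LEN, DIM) shape, regardless of its local prompt length."""
    stacked = torch.stack(client_global_prompts)   # (num_clients, GLOBAL_LEN, DIM)
    w = torch.tensor(weights).view(-1, 1, 1)
    return (w * stacked).sum(dim=0) / w.sum()


def svd_project(global_prompt, local_prompt, k: int):
    """Illustrative SVD-based projection: keep only the part of the aggregated
    global prompt that lies in the top-k right-singular subspace of the local
    prompt, filtering out conflicting directions before alignment."""
    _, _, Vh = torch.linalg.svd(local_prompt, full_matrices=False)
    basis = Vh[:k]                                  # (k, DIM) orthonormal rows
    return global_prompt @ basis.T @ basis          # projection onto local subspace


def alignment_loss(projected_global, local_prompt):
    """Assumed alignment term (the paper's bidirectional formulation may differ):
    pull pooled global and local prompts toward each other via cosine similarity."""
    g = projected_global.mean(dim=0)
    l = local_prompt.mean(dim=0)
    return 1.0 - torch.nn.functional.cosine_similarity(g, l, dim=0)
```

In this reading, only `server_aggregate` touches parameters shared across clients, while `svd_project` and `alignment_loss` run locally, so heterogeneous local prompt lengths never enter the aggregation step.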
Lay Summary: Federated learning allows devices and institutions such as smartphones, hospitals, or schools to train AI models together without sharing sensitive data. A key challenge is that each participant has different amounts and types of data, and forcing all of them to follow the same learning process can lead to poor performance.
Our work introduces FedPHA, a method designed to address this problem by balancing shared learning with personalization. It gives each device a common "global prompt" to ensure consistent collaboration, while also allowing it to have its own "local prompt" that better fits its unique data. We also designed a mathematical technique to ensure these global and local prompts don’t conflict, helping devices learn effectively from both shared and personalized knowledge.
This approach allows AI systems to benefit from diverse data sources, improving overall learning while ensuring that each device’s unique needs are respected.
Link To Code: https://github.com/CYFang6/FedPHA
Primary Area: Deep Learning->Large Language Models
Keywords: Federated Prompt Learning, Domain Adaptation, Model Heterogeneity, Singular Value Decomposition
Submission Number: 4565