Abstract: In multi-user task-oriented semantic communication, existing deep learning (DL)-based approaches may fail to scale while maintaining satisfactory performance, especially when providing semantic services for diverse downstream semantic-aware tasks over heterogeneous networks. Recent advances in foundation models (FMs), which exhibit strong knowledge-representation capabilities, have expanded the boundaries of what is possible with DL-based semantic communication. In this paper, we establish an FM-based multi-user semantic communication framework via personalized federated parameter-efficient fine-tuning, in which each user equipment receives personalized semantic services for its own downstream tasks. The proposed approach achieves scalable and generalizable performance in a computation-efficient manner. We conduct experiments with state-of-the-art FMs (i.e., LLaMA-7B and CLIP) across diverse system configurations to demonstrate the efficacy of the proposed approach.
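The abstract does not spell out the fine-tuning scheme, so the following is only a minimal sketch of one plausible instantiation of personalized federated parameter-efficient fine-tuning: LoRA-style adapters on a frozen FM backbone, where the server averages only the shared low-rank factor `A` across clients while each user equipment keeps its factor `B` local as the personalized component. All names here (`LoRALinear`, `local_update`, `fedavg_shared`) and the A/B sharing split are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of personalized federated PEFT with LoRA adapters.
# The frozen FM backbone never leaves the device; only tiny adapter
# tensors are trained and communicated, which is what makes the scheme
# computation- and communication-efficient.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # FM weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

def local_update(model, loader, epochs=1, lr=1e-3):
    """One client's round: only adapter parameters receive gradients."""
    opt = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    # Upload only the low-rank adapter tensors, not the frozen backbone.
    return {k: v.detach().clone() for k, v in model.state_dict().items()
            if k.endswith(("A", "B"))}

def fedavg_shared(adapter_states):
    """Server step: average only the shared A factors; each client's B
    factor stays local, giving per-client personalization."""
    return {k: torch.stack([s[k] for s in adapter_states]).mean(0)
            for k in adapter_states[0] if k.endswith("A")}

if __name__ == "__main__":
    torch.manual_seed(0)
    clients = [LoRALinear(nn.Linear(16, 4)) for _ in range(3)]
    for _ in range(2):                        # two communication rounds
        states = []
        for c in clients:
            # toy per-client data standing in for each UE's task samples
            data = [(torch.randn(8, 16), torch.randint(0, 4, (8,)))
                    for _ in range(5)]
            states.append(local_update(c, data))
        shared = fedavg_shared(states)
        for c in clients:                     # broadcast shared factor only
            c.load_state_dict(shared, strict=False)
```

Under this assumed split, the averaged `A` factor carries knowledge common to all users, while the locally retained `B` factor adapts the shared representation to each user's downstream task; other personalization splits would fit the same loop.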