Federated Inference for Heterogeneous LLM Communication and Collaboration

Published: 26 Jan 2026, Last Modified: 26 Jan 2026 · AAAI 2026 Workshop on ML4Wireless Poster · CC BY 4.0
Keywords: Large Language Model, Federated learning, Collaborative inference, Cache communication
Abstract: Given the limited performance and efficiency of on-device Large Language Models (LLMs), collaboration among multiple LLMs enables desirable performance enhancements, in which data, tokens, and model weights can be shared across LLMs. This process is constrained by task-oriented QoS demands, privacy requirements, and inherent system heterogeneity. In view of these challenges, and to fully exploit on-device inference capabilities, this position paper presents a novel federated inference framework, termed federated refinement (\texttt{FedRefine}). The framework offers a new paradigm in which heterogeneous LLMs collaboratively perform inference by communicating KV caches in a privacy-preserving manner. Numerical results are provided to highlight the superiority of \texttt{FedRefine}, and several interesting topics are identified for future research. By exploring LLM-native communication, we hope to open a new direction for this broad area.
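The abstract's central mechanism, exchanging KV caches between heterogeneous LLMs, can be made concrete with a small sketch. The code below is illustrative only, not the paper's \texttt{FedRefine} implementation: the cache geometries, the learned \texttt{KVAdapter} projection, and the payload format are all assumptions introduced here to show how one model's per-layer KV cache could be mapped into a differently shaped peer model's cache space before transmission, so that the peer can attend over the received context without seeing raw text or weights.

```python
# Minimal sketch of KV-cache exchange between heterogeneous LLMs.
# All geometry constants and the KVAdapter are illustrative assumptions,
# not part of the paper's actual FedRefine design.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Device-side model: small KV-cache geometry (assumed).
n_heads_src, d_head_src, seq_len = 4, 32, 16
# Peer model: larger KV-cache geometry (assumed).
n_heads_dst, d_head_dst = 8, 64

# One layer's KV cache from the source model: (batch, heads, seq, head_dim).
k_src = torch.randn(1, n_heads_src, seq_len, d_head_src)
v_src = torch.randn(1, n_heads_src, seq_len, d_head_src)

class KVAdapter(nn.Module):
    """Learned linear map from the source cache space to the target's.
    How such an adapter would be trained is outside this sketch's scope."""
    def __init__(self, src_dim: int, dst_dim: int):
        super().__init__()
        self.proj_k = nn.Linear(src_dim, dst_dim)
        self.proj_v = nn.Linear(src_dim, dst_dim)

    def forward(self, k, v):
        # Flatten the head dimension, project, then reshape to the
        # target model's (batch, heads, seq, head_dim) layout.
        b, h, s, d = k.shape
        k_flat = k.permute(0, 2, 1, 3).reshape(b, s, h * d)
        v_flat = v.permute(0, 2, 1, 3).reshape(b, s, h * d)
        k_dst = (self.proj_k(k_flat)
                 .view(b, s, n_heads_dst, d_head_dst).permute(0, 2, 1, 3))
        v_dst = (self.proj_v(v_flat)
                 .view(b, s, n_heads_dst, d_head_dst).permute(0, 2, 1, 3))
        return k_dst, v_dst

adapter = KVAdapter(n_heads_src * d_head_src, n_heads_dst * d_head_dst)

# What crosses the network is the projected cache, not raw tokens or
# model weights: a simple stand-in for the privacy-preserving channel.
k_dst, v_dst = adapter(k_src, v_src)
payload = {"k": k_dst.detach(), "v": v_dst.detach()}
print(payload["k"].shape)  # torch.Size([1, 8, 16, 64])
```

In this hypothetical setup, a per-layer linear adapter is the simplest choice; in practice the transmitted cache could additionally be quantized or compressed to meet the QoS constraints the abstract mentions.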
Submission Number: 30