Keywords: Federated Learning, Reasoning, Uncertainty-aware, Large Language Models
TL;DR: We introduce FERA, a training-free framework for uncertainty-aware federated reasoning with large language models. FERA reduces computational and communication overhead while maintaining strong reasoning performance under heterogeneous data.
Abstract: Reasoning capabilities in large language models (LLMs) are critical for advancing beyond pattern recognition toward systematic problem-solving, logical inference, and reliable decision-making. Most reasoning LLM (rLLM) approaches rely on fine-tuning but are constrained by privacy barriers that prevent centralizing domain-specific reasoning data. Federated methods enable privacy-preserving collaboration but come with prohibitive computational and communication costs. To address these challenges, we propose the Uncertainty-aware Federated Reasoning (FERA) framework, which eliminates costly training and reduces communication overhead while preserving strong reasoning performance under heterogeneous data. FERA introduces Uncertainty-Aware Aggregation, an iterative server–client collaboration scheme in which uncertainty estimates are progressively refined across communication rounds to guide response generation. We establish theoretical convergence guarantees and validate our framework through extensive experiments, demonstrating that FERA achieves state-of-the-art reasoning accuracy with substantially greater efficiency than existing approaches.
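To make the idea of uncertainty-aware aggregation concrete, here is a minimal, purely illustrative sketch: clients submit candidate answers together with per-client uncertainty estimates (e.g., predictive entropy), and the server weights each answer by inverse uncertainty before selecting one. The function name, data format, and weighting rule are all assumptions for illustration, not the actual FERA algorithm described in the paper.

```python
# Hypothetical sketch of uncertainty-weighted answer aggregation on the
# server. This is NOT FERA's actual algorithm; the names and the
# inverse-uncertainty weighting are illustrative assumptions.
from collections import defaultdict

def aggregate_by_uncertainty(client_responses):
    """Select the candidate answer with the highest total weight,
    where each vote is weighted by the inverse of the reporting
    client's uncertainty estimate.

    client_responses: list of (answer, uncertainty) pairs with
    uncertainty > 0 (e.g., predictive entropy of a client's LLM).
    """
    scores = defaultdict(float)
    for answer, uncertainty in client_responses:
        # Lower uncertainty -> larger weight for this client's answer.
        scores[answer] += 1.0 / (uncertainty + 1e-8)
    return max(scores, key=scores.get)

# Example: three clients propose answers; the two confident clients agree.
responses = [("42", 0.2), ("41", 1.5), ("42", 0.4)]
print(aggregate_by_uncertainty(responses))  # -> 42
```

In an iterative variant, the server would broadcast the aggregated answer back to clients, who re-estimate their uncertainties and respond again over several communication rounds.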
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 23806