Keywords: Federated Computation, Large Language Models, Multi-agent, Reasoning
Abstract: LLM-powered agents often reason from scratch on each new problem instance and lack effective mechanisms for transferring learned skills to other agents. We propose the first federated-like framework, \emph{federation over text} (FoT), in which multiple agents solving different tasks collectively learn a library of metacognitive insights by iteratively federating their local reasoning processes. Instead of federating over gradients (i.e., distributed training), FoT operates at the \textbf{semantic level}. Each agent performs local thinking and reflection on its specific tasks and shares reasoning traces with a server, which aggregates them into a cross-task (and cross-domain) insight library that agents can leverage to improve performance on new reasoning tasks. Experiments show that FoT improves both reasoning accuracy and efficiency on mathematical problem solving and machine-learning insight discovery. On math problems, we achieve up to \textbf{63\%} improvement in accuracy and a \textbf{28\%} reduction in generated tokens across benchmarks.
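The abstract outlines FoT's loop: local thinking and reflection, trace sharing, and server-side semantic aggregation into an insight library. Below is a minimal Python sketch of that loop under stated assumptions; the `Agent`, `Server`, `llm`, and `federate` names are illustrative inventions rather than the paper's actual interfaces, and the stubbed `llm` stands in for any real model backend.

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    # Stub so the sketch runs end to end; swap in a real model call.
    return "Decompose the problem into subgoals before computing."

@dataclass
class Agent:
    task: str
    insights: list[str] = field(default_factory=list)

    def solve_and_reflect(self, problem: str) -> str:
        # Local thinking: solve the task with the current insight library in context.
        context = "\n".join(self.insights)
        trace = llm(f"Insights:\n{context}\n\nTask ({self.task}): {problem}\n"
                    "Think step by step.")
        # Local reflection: distill the trace into one reusable insight.
        return llm(f"Trace:\n{trace}\nState one reusable reasoning insight.")

@dataclass
class Server:
    library: list[str] = field(default_factory=list)

    def aggregate(self, reflections: list[str]) -> list[str]:
        # Semantic aggregation: merge textual reflections into a deduplicated
        # cross-task insight library; no gradients are exchanged.
        merged = llm("Merge into a concise insight library:\n"
                     + "\n".join(self.library + reflections))
        self.library = [line for line in merged.splitlines() if line.strip()]
        return self.library

def federate(agents: list[Agent], server: Server,
             problems: dict[str, str], rounds: int = 3) -> list[str]:
    for _ in range(rounds):
        reflections = [a.solve_and_reflect(problems[a.task]) for a in agents]
        library = server.aggregate(reflections)  # server-side merge
        for a in agents:
            a.insights = library                 # broadcast updated library
    return server.library

if __name__ == "__main__":
    agents = [Agent("algebra"), Agent("ml-tuning")]
    problems = {"algebra": "Solve x^2 - 5x + 6 = 0.",
                "ml-tuning": "Why does my model overfit?"}
    print(federate(agents, Server(), problems))
```

The key design point mirrored here is that only text (reasoning traces and the merged library) crosses the agent-server boundary, which is what distinguishes federation over text from gradient-based federated training.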
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 96