Parallelism Meets Adaptiveness: Scalable Document Understanding in Multi-Agent LLM Systems

Published: 17 Dec 2025, Last Modified: 19 Dec 2025 · WoMAPF Oral · CC BY 4.0
Keywords: Large Language Models, Multi-Agent Systems, Dynamic Task Routing, Bidirectional Feedback, Parallel Agent Evaluation
Abstract: Large language model (LLM) agents have shown increasing promise for collaborative task completion. However, existing multi-agent frameworks often rely on static workflows, fixed roles, and limited inter-agent communication, reducing their effectiveness in open-ended, high-complexity domains. This paper presents a multi-agent coordination framework that improves the accuracy of LLMs in complex financial document analysis. Unlike existing frameworks that rely on static routing or linear workflows, our approach introduces Parallel Agent Evaluation, a mechanism in which multiple agents compete on high-ambiguity subtasks. A centralized evaluator scores the parallel outputs for factuality and coherence and selects the best result. We evaluate this architecture on SEC 10-K filings, demonstrating a 27% improvement in compliance accuracy and a 74% reduction in revision rates over static baselines. These results indicate that structured competition and dynamic routing significantly reduce hallucinations in high-stakes document understanding.
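The Parallel Agent Evaluation loop described in the abstract (competing agents on one subtask, a centralized evaluator scoring factuality and coherence, best output selected) might be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the agent and evaluator functions, the term-overlap factuality proxy, and the 0.7/0.3 score weighting are all hypothetical stand-ins.

```python
# Sketch of Parallel Agent Evaluation: run competing agents concurrently,
# score each output, keep the highest-scoring one. All function names and
# scoring heuristics here are illustrative assumptions, not from the paper.
from concurrent.futures import ThreadPoolExecutor

def evaluate(output, reference_terms):
    """Toy centralized evaluator.

    Factuality: fraction of expected reference terms present in the output.
    Coherence: crude length-sanity check (real systems would use an LLM judge).
    """
    factuality = sum(t in output for t in reference_terms) / len(reference_terms)
    coherence = 1.0 if 3 <= len(output.split()) <= 200 else 0.5
    return 0.7 * factuality + 0.3 * coherence  # weights are arbitrary

def parallel_agent_evaluation(agents, subtask, reference_terms):
    """Run all agents on the same subtask in parallel; return (best_output, scores)."""
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        outputs = list(pool.map(lambda agent: agent(subtask), agents))
    scores = [evaluate(o, reference_terms) for o in outputs]
    best = max(range(len(outputs)), key=scores.__getitem__)
    return outputs[best], scores
```

In a full system each `agent` would be an LLM call with a distinct prompt or role; the dynamic router would invoke this competition only on subtasks flagged as high-ambiguity, falling back to a single agent elsewhere.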
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 15