Chow–Liu Ordering for Long-Context Reasoning in Chain-of-Agents

Published: 03 Mar 2026, Last Modified: 10 Mar 2026 · ICLR 2026 Workshop MemAgents · CC BY 4.0
Keywords: long-context reasoning, chain-of-agents
TL;DR: Chunk ordering matters for long-context reasoning with chain-of-agents
Abstract: Sequential multi-agent reasoning frameworks such as $\textit{Chain-of-Agents (CoA)}$ handle long-context queries by decomposing inputs into chunks and processing them sequentially using LLM-based worker agents that read from and update a bounded shared memory. From a probabilistic perspective, CoA aims to approximate the conditional distribution corresponding to a model capable of jointly reasoning over the entire long context. CoA achieves this through a latent-state factorization in which only bounded summaries of previously processed evidence are passed between agents. The resulting bounded-memory approximation introduces a lossy information bottleneck, making the final evidence state inherently dependent on the order in which chunks are processed. In this work, we study the problem of chunk ordering for long-context reasoning. We use the well-known $\textit{Chow–Liu trees}$ to learn a dependency structure that prioritizes strongly related chunks. Empirically, we show that a $\textit{breadth-first}$ traversal of the resulting tree yields chunk orderings that reduce information loss across agents and consistently outperform both default document-chunk ordering and semantic score-based ordering in answer relevance and exact-match accuracy across three long-context benchmarks.
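The ordering procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes pairwise dependency scores between chunks (e.g., estimated mutual information) are already available as a symmetric matrix, builds the Chow–Liu tree as a maximum-weight spanning tree over those scores, and emits a breadth-first traversal as the chunk-processing order. The function name and the choice of root are illustrative.

```python
from collections import deque

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree


def chow_liu_bfs_order(mi, root=0):
    """Return a chunk-processing order from a BFS over a Chow-Liu tree.

    mi   : symmetric (n, n) array of pairwise dependency scores
           (e.g., estimated mutual information between chunks);
           the diagonal is ignored.
    root : index of the chunk to start the traversal from (illustrative
           choice; any root yields a valid traversal of the tree).
    """
    mi = np.asarray(mi, dtype=float)
    # Chow-Liu tree = maximum-weight spanning tree over pairwise scores;
    # negate the weights so SciPy's minimum_spanning_tree finds it.
    mst = minimum_spanning_tree(-mi).toarray()
    adj = (mst != 0) | (mst.T != 0)  # symmetrize to an undirected adjacency

    # Standard breadth-first traversal from the root.
    order, seen, queue = [], {root}, deque([root])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in np.flatnonzero(adj[u]):
            v = int(v)
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order


# Toy example: chunk 0 is strongly related to chunks 1 and 2,
# and chunk 1 to chunk 3, so BFS visits near-root chunks first.
scores = np.array([
    [0.0, 0.9, 0.8, 0.1],
    [0.9, 0.0, 0.1, 0.7],
    [0.8, 0.1, 0.0, 0.1],
    [0.1, 0.7, 0.1, 0.0],
])
print(chow_liu_bfs_order(scores))  # -> [0, 1, 2, 3]
```

A breadth-first traversal keeps siblings that share a parent close together in the sequence, which matches the paper's intuition that strongly related chunks should be processed near one another before the bounded memory drifts.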
Submission Number: 107