Agentic Parallel Thinking for Deep Information Seeking

ACL ARR 2026 January Submission 2340 Authors

02 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Information Seeking, Open-Domain Question Answering, LLM Agents, Parallel Thinking
Abstract: Parallel thinking broadens exploration and complements deep information-seeking (IS) agents, but in this setting it is hindered by redundant from-scratch rollouts and context-limited answer generation that cannot reliably integrate long-horizon trajectories. To address these issues, we propose ParallelMuse, a two-stage inference-only paradigm designed for deep IS agents. The first stage, Functionality-Specified Partial Rollout, partitions generated sequences into functional regions and performs uncertainty-guided path reuse and branching to enhance exploration efficiency. The second stage, Compressed Reasoning Aggregation, exploits reasoning redundancy to losslessly compress information relevant to answer derivation and synthesize a coherent final answer. Experiments across multiple open-source agents and benchmarks demonstrate up to 62% performance improvement with a 10--30% reduction in exploratory token consumption.
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: open-domain QA, multi-hop QA, reasoning
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 2340