From Underspecified Queries to Clear Research Scope: Context-Aware Planning for Deep Research

ACL ARR 2026 January Submission 2982 Authors

04 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Deep Research, Research Planning, AI Agents, Resource Efficiency
Abstract: Current deep research systems typically generate a research brief by expanding user queries with a large language model (LLM). This expansion-only approach often produces underspecified formulations that lack clear scope and contextual grounding, creating a boundary-ambiguity challenge that leads to redundant exploration and inefficient tool usage. While existing methods rely on iterative user clarification, they often fail when users cannot provide precise domain constraints. In this paper, we argue that robust research planning requires proactively exposing implicit constraints, rather than relying solely on query expansion or user clarification. We introduce the Context-Aware Planning Framework (CAPF), which uses a pre-search phase to surface contextual signals and analyze boundary ambiguities. By integrating these insights into the planning process, CAPF generates grounded research briefs that better constrain downstream exploration. Experiments across four complex fact-seeking tasks demonstrate that CAPF consistently outperforms recent deep research agents and agentic retrieval systems. Our analysis further reveals that exposing the agent to the inherent difficulties of a question during the planning stage is a critical factor for achieving higher accuracy while simultaneously reducing resource consumption.
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: Information Extraction, Information Retrieval and Text Mining, NLP Applications
Contribution Types: NLP engineering experiment, Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 2982