MIRAGE: Multi-hop Reasoning with Ambiguity Evaluation for Illusory Questions

ICLR 2026 Conference Submission17594 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Ambiguity, Agent, Dataset, Multi-hop
TL;DR: This paper introduces MIRAGE, a 1,142-example benchmark for multi-hop ambiguous QA, and shows that a multi-agent approach (CLARION) outperforms current RAG-based baselines.
Abstract: Real-world multi-hop question answering (QA) often involves ambiguity that is inseparable from the reasoning process itself. This creates a distinct challenge: multiple reasoning paths emerge from a single question, and because each sub-question can itself be ambiguous, answering a single question requires resolving multiple layers of ambiguity throughout the reasoning chain. We find that current large language models (LLMs) struggle in this setting, typically exploring incorrect reasoning paths and producing incomplete answers. To facilitate research on multi-hop ambiguity, we introduce MIRAGE (Multi-hop Reasoning with Ambiguity Evaluation for Illusory Questions), a benchmark designed to analyze and evaluate this challenging intersection of ambiguity interpretation and multi-hop reasoning. MIRAGE contains 1,142 high-quality examples of ambiguous multi-hop questions, categorized under a taxonomy of syntactic, general, and semantic ambiguity, and curated through a rigorous multi-LLM verification pipeline. Our experiments reveal that even state-of-the-art models struggle on MIRAGE, confirming that resolving ambiguity combined with multi-step inference is a distinct and significant challenge. To establish a robust baseline, we propose CLARION (Clarifying Ambiguity with a Reasoning and Instruction), a multi-agent framework that outperforms existing approaches on MIRAGE and points toward more adaptive, ambiguity-aware reasoning systems.
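This page does not describe CLARION's internals, but the abstract's core idea, that ambiguity must be resolved at every hop rather than only at the original question, can be illustrated with a minimal sketch. All names below (`decompose`, `detect_ambiguity`, `answer_sub_question`, the `Step` record) are hypothetical placeholders for illustration, not the paper's actual agent design or API.

```python
# Minimal sketch of an ambiguity-aware multi-hop QA loop.
# Every component here is a stand-in: in a real system, an LLM agent
# would decompose the question, propose readings, and retrieve answers.
from dataclasses import dataclass, field

@dataclass
class Step:
    sub_question: str
    interpretations: list                         # candidate readings of the sub-question
    answers: dict = field(default_factory=dict)   # reading -> answer

def detect_ambiguity(sub_question: str) -> list:
    """Placeholder: an agent would propose distinct readings here,
    e.g. 'the lead of Titanic' -> the 1953 film vs. the 1997 film."""
    return [sub_question]  # default: treat the sub-question as unambiguous

def answer_sub_question(reading: str) -> str:
    """Placeholder: retrieval + generation would resolve one reading."""
    return f"<answer to: {reading}>"

def multihop_answer(question: str, decompose) -> list:
    """Resolve ambiguity at *every* hop, branching once per interpretation,
    instead of disambiguating only the top-level question."""
    trace = []
    for sub_q in decompose(question):
        step = Step(sub_q, detect_ambiguity(sub_q))
        for reading in step.interpretations:
            step.answers[reading] = answer_sub_question(reading)
        trace.append(step)
    return trace

# Toy usage: a fixed two-hop decomposition stands in for an agent.
trace = multihop_answer(
    "Who directed the film that starred the lead of Titanic?",
    decompose=lambda q: [
        "Who was the lead of Titanic?",
        "Who directed the film that starred that person?",
    ],
)
for step in trace:
    print(step.sub_question, "->", step.answers)
```

The point of the sketch is structural: the ambiguity check sits inside the hop loop, so the number of reasoning branches can grow at each step, which is the failure mode the abstract says current LLMs handle poorly.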
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 17594