DuQUAD: A Dual-View Framework with Quality-Aware Evidence Pruning for Multi-Document Question Answering

ACL ARR 2026 January Submission 5212 Authors

05 Jan 2026 (modified: 20 Mar 2026) · CC BY 4.0
Keywords: Large Language Models, Retrieval-Augmented Generation, Multi-Agent, Multi-Document QA, Long-Context Reasoning
Abstract: Large Language Models with Retrieval-Augmented Generation perform well on knowledge-intensive question answering, but often fail to utilize relevant evidence in long-context multi-document settings due to positional bias, known as the Lost-in-the-Middle phenomenon. Existing mitigation strategies, including document reordering and attention steering, are fragile in noisy settings, where boundary bias and spurious relevance suppress mid-context evidence. We propose DuQUAD, a multi-document QA framework that mitigates positional bias via dual-view reasoning and quality-aware evidence pruning. It combines a Local Agent guided by a Structural Fusion Score with a Global Agent over the full context, enabling complementary local recovery and global coverage. Candidate evidence from both agents is filtered by explicit quality scoring and refined at the sentence level to suppress noise. DuQUAD consistently outperforms strong Lost-in-the-Middle mitigation baselines, including recent multi-agent and context optimization methods, achieving up to 13.8% improvement in answer accuracy and up to 9.4% improvement in golden document recall.
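The abstract outlines a pipeline of dual-view evidence gathering followed by quality-aware pruning. The sketch below illustrates that flow in simplified form; all function names, the word-overlap scoring heuristic, and the thresholds are hypothetical stand-ins, since the paper's actual Structural Fusion Score, agent prompting, and quality model are not given here.

```python
# Illustrative sketch of a DuQUAD-style pipeline: a local view ranks
# individual documents, a global view scans the full context, and the
# union of candidates is refined at the sentence level. All scoring
# functions here are toy placeholders, not the paper's actual method.

def overlap_score(text: str, question: str) -> float:
    """Toy quality score: fraction of question words present in the text."""
    q_words = set(question.lower().split())
    t_words = set(text.lower().split())
    return len(q_words & t_words) / max(len(q_words), 1)

def local_agent(docs: list[str], question: str, top_k: int = 2) -> list[str]:
    """Local view: rank documents individually by a fusion-style score."""
    ranked = sorted(docs, key=lambda d: overlap_score(d, question), reverse=True)
    return ranked[:top_k]

def global_agent(docs: list[str], question: str) -> list[str]:
    """Global view: keep any document in the full context that matches at all."""
    return [d for d in docs if overlap_score(d, question) > 0]

def prune_sentences(candidates: list[str], question: str,
                    threshold: float = 0.3) -> list[str]:
    """Sentence-level refinement: drop sentences below a quality threshold."""
    kept = []
    for doc in candidates:
        for sent in doc.split(". "):
            if overlap_score(sent, question) >= threshold:
                kept.append(sent.strip().rstrip("."))
    return kept

def duquad_pipeline(docs: list[str], question: str) -> list[str]:
    """Union both views (deduplicated, order-preserving), then prune."""
    candidates = list(dict.fromkeys(
        local_agent(docs, question) + global_agent(docs, question)))
    return prune_sentences(candidates, question)
```

In this toy version, noisy documents survive the local view's top-k cut but are removed by the sentence-level threshold, mirroring the abstract's claim that explicit quality scoring suppresses spurious relevance that document-level reordering alone would miss.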
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: Generation, Information Extraction, Language Modeling, Question Answering, Semantics: Lexical and Sentence-Level
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 5212