PluriHopRAG: Exhaustive, Recall-Sensitive QA Through Corpus-Specific Document Structure Learning

ACL ARR 2026 January Submission6120 Authors

05 Jan 2026 (modified: 20 Mar 2026) · CC BY 4.0
Keywords: pluri-hop QA, retrieval-augmented generation, RAG, exhaustive retrieval, recall-sensitive QA, document aggregation, repetitive corpora, report analysis, query decomposition, document filtering, cross-encoder reranking, multi-document reasoning, technical reports, wind energy data, benchmark dataset, PluriHopWIND, Loong benchmark, long-context QA, information retrieval, large language models
Abstract: Retrieval-Augmented Generation (RAG) is used in question answering (QA) systems to improve performance when the relevant information lies in one (single-hop) or several (multi-hop) passages. However, many real-life scenarios (e.g., analyzing financial, legal, or medical reports) require checking every document for relevant information, with no clear stopping condition. We term these pluri-hop questions and formalize them via three conditions: recall sensitivity, exhaustiveness, and exactness. To study this setting, we introduce PluriHopWIND, a multilingual diagnostic benchmark of 48 pluri-hop questions over 191 real wind-industry reports, whose high repetitiveness reflects the challenge of distractors in real-world datasets. Naive, graph-based, and multimodal RAG methods reach at most 40\% statement-wise F1 on PluriHopWIND. Motivated by this, we propose PluriHopRAG, which learns from synthetic examples to decompose queries according to corpus-specific document structure and employs a document-level cross-encoder filter to minimize costly LLM reasoning. We evaluate PluriHopRAG on PluriHopWIND and on the Loong benchmark, which is built on financial, legal, and scientific reports. On PluriHopWIND, our method improves F1 by 18-52\% across base LLMs; on Loong, it improves over long-context reasoning by 33\% and over naive RAG by 52\%.
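The abstract's core efficiency idea, scoring every document with a cheap filter and reserving LLM reasoning for the survivors, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `cheap_relevance_score` is a hypothetical stand-in for a real cross-encoder reranker, and the `reason` callable stands in for a costly LLM call.

```python
from typing import Callable

def cheap_relevance_score(query: str, doc: str) -> float:
    """Stand-in for a cross-encoder reranker: fraction of query tokens
    that appear in the document (score in [0, 1])."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def filter_then_reason(query: str, docs: list[str],
                       reason: Callable[[str, str], str],
                       threshold: float = 0.5) -> list[str]:
    """Exhaustively score *every* document with the cheap filter, then run
    the expensive reasoning step only on documents above the threshold."""
    kept = [d for d in docs if cheap_relevance_score(query, d) >= threshold]
    return [reason(query, d) for d in kept]
```

In a real system the token-overlap score would be replaced by a trained cross-encoder (e.g. a fine-tuned reranker), but the control flow is the same: the filter visits all documents, satisfying the exhaustiveness requirement, while the LLM is invoked only a handful of times.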
Paper Type: Long
Research Area: Retrieval-Augmented Language Models
Research Area Keywords: multihop QA, benchmarking, multilingual corpora, NLP datasets, financial/business NLP, legal NLP, fine-tuning, retrieval-augmented generation, document representation, document-level extraction
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources
Languages Studied: English, German
Submission Number: 6120