Multi-scale Granularity Alignment for Multi-hop Retrieval-Augmented Generation

ACL ARR 2026 January Submission 2467 Authors

03 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Retrieval-Augmented Generation, LLM, Granularity alignment
Abstract: Although query decomposition benefits multi-hop RAG, the entanglement of mixed granularities within undecomposed knowledge hinders evidence alignment for attention-constrained models. To address this, we introduce MGA-RAG, a novel RAG framework that combines granularity decoupling with multi-scale granularity alignment. Specifically, MGA-RAG pioneers a multi-scale alignment strategy that reconstructs retrieved documents into representations of varying granularities and performs scale-aware alignment with the decomposed queries. This strategy moves evidence alignment ahead of the generation phase, enabling the generation model to concentrate on key evidence while reducing redundancy and noise. Meanwhile, by fusing the generation results from multiple scale-specific views, MGA-RAG promotes balanced attention to evidence across granularities. Furthermore, we introduce a Self-Correcting Decoupling Agent that audits the granularity decoupling process, mitigating the error propagation caused by inaccurate granularity decomposition. Experimental results on three multi-hop QA datasets demonstrate that MGA-RAG significantly outperforms existing methods, and in-depth analysis further validates the effectiveness of the proposed approach.
Paper Type: Long
Research Area: Information Extraction and Retrieval
Research Area Keywords: passage retrieval; dense retrieval
Contribution Types: NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 2467
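Since only the abstract is available here, the following is a minimal sketch of the pipeline it describes: decouple the question into sub-queries, rebuild retrieved documents at several granularities, align each sub-query with the top evidence units at each scale before generation, generate one answer per scale-specific view, and fuse the results. Every name in the sketch (`embed`, `rebuild_at_scale`, `mga_rag_answer`, the bag-of-words scoring, the three scales, `top_k`) is a hypothetical stand-in, not the paper's implementation; the Self-Correcting Decoupling Agent that audits decomposition is noted in a comment but not implemented.

```python
from collections import Counter
from math import sqrt
from typing import Callable


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a stand-in for a real dense retriever."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(v * b[t] for t, v in a.items())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def rebuild_at_scale(doc: str, scale: str) -> list[str]:
    """Reconstruct a retrieved document into units of one granularity."""
    if scale == "sentence":
        return [s.strip() + "." for s in doc.split(".") if s.strip()]
    if scale == "paragraph":
        return [p.strip() for p in doc.split("\n\n") if p.strip()]
    return [doc]  # document scale: the whole doc is the unit


def mga_rag_answer(
    question: str,
    docs: list[str],
    decompose: Callable[[str], list[str]],
    generate: Callable[[str, list[str]], str],
    fuse: Callable[[list[str]], str],
    scales: tuple[str, ...] = ("sentence", "paragraph", "document"),
    top_k: int = 2,
) -> str:
    # 1) Granularity decoupling: split the question into sub-queries.
    #    (The paper's Self-Correcting Decoupling Agent, which audits this
    #    step against decomposition errors, is omitted from the sketch.)
    sub_queries = decompose(question)
    per_scale_answers = []
    for scale in scales:
        # 2) Rebuild the retrieved documents at this granularity.
        units = [u for d in docs for u in rebuild_at_scale(d, scale)]
        # 3) Scale-aware alignment: match each sub-query to its top
        #    evidence units *before* generation, trimming noise early.
        evidence = []
        for sq in sub_queries:
            q_vec = embed(sq)
            ranked = sorted(units, key=lambda u: cosine(q_vec, embed(u)),
                            reverse=True)
            evidence.extend(ranked[:top_k])
        # 4) Generate one answer from this scale-specific view.
        per_scale_answers.append(generate(question, evidence))
    # 5) Fuse the scale-specific answers into the final answer.
    return fuse(per_scale_answers)
```

A toy invocation, with lambdas standing in for the LLM-based decomposition, generation, and fusion steps:

```python
docs = [
    "Paris is the capital of France. The Louvre is in Paris.",
    "France is in Europe. The Seine flows through Paris.",
]
answer = mga_rag_answer(
    "Which river flows through the capital of France?",
    docs,
    decompose=lambda q: ["What is the capital of France?",
                         "Which river flows through that city?"],
    generate=lambda q, ev: " ".join(ev),      # stand-in for an LLM call
    fuse=lambda answers: max(answers, key=len),  # crude fusion heuristic
)
print(answer)
```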