SpotlightRAG: Enhancing Factual Accuracy with Position-Aware Span Selection

ICLR 2026 Conference Submission18992 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: RAG, LLM, Factual Accuracy
TL;DR: Enhancing Factual Accuracy with Position-Aware Span Selection
Abstract: Retrieval-Augmented Generation (RAG) enhances LLMs with external knowledge, but current methods face key limitations. Most solutions operate at a coarse passage or sentence level, indiscriminately concatenating retrieved text, which introduces noise, overlooks decisive sub-sentential phrases, and is susceptible to positional bias where evidence is lost in the middle of long contexts. To overcome these challenges, we propose SpotlightRAG, an inference-time framework that enhances factual accuracy through precise, span-level context selection and explicit relevance signaling. SpotlightRAG employs a position-aware scoring mechanism to identify and weight critical text spans, directly countering positional bias. It then uses novel retrieval-aware prefix tokens to explicitly annotate the relevance of each span for the generator, providing fine-grained, interpretable control without model retraining. Extensive experiments on four benchmarks—PopQA, TriviaQA, Natural Questions, and MultiHopQA—demonstrate that SpotlightRAG consistently outperforms state-of-the-art baselines, including InstructRAG, RankRAG, and In-Context RALM, improving accuracy over strong baselines by 2.1% on PopQA and 1.2% on the challenging MultiHopQA dataset. An anonymized implementation is available at https://anonymous.4open.science/r/SpotlightRAG-5F6A/.
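The abstract's mechanism can be illustrated with a minimal sketch. Everything below is an assumption for illustration only, not the paper's implementation: the `[REL_HIGH]`/`[REL_LOW]` prefix tokens, the Jaccard-overlap relevance score (a stand-in for a learned scorer), and the middle-position boost are all hypothetical choices that merely instantiate the two ideas named in the abstract, position-aware span scoring and explicit relevance annotation for the generator.

```python
def position_weight(index: int, total: int, boost: float = 0.3) -> float:
    """Upweight spans near the middle of the context, where evidence is
    most often lost to positional bias ('lost in the middle')."""
    if total <= 1:
        return 1.0
    # Distance from the middle position, normalized to [0, 1].
    dist = abs(index - (total - 1) / 2) / ((total - 1) / 2)
    return 1.0 + boost * (1.0 - dist)

def lexical_overlap(query: str, span: str) -> float:
    """Toy relevance score: Jaccard overlap of word sets (a stand-in
    for whatever scorer the actual system uses)."""
    q, s = set(query.lower().split()), set(span.lower().split())
    return len(q & s) / len(q | s) if q | s else 0.0

def select_and_annotate(query: str, spans: list[str],
                        top_k: int = 2, high: float = 0.25) -> list[str]:
    """Score each span with a position-aware weight, keep the top-k,
    and tag each kept span with a relevance prefix token that the
    generator can condition on at inference time."""
    n = len(spans)
    scored = [(lexical_overlap(query, sp) * position_weight(i, n), sp)
              for i, sp in enumerate(spans)]
    scored.sort(key=lambda pair: -pair[0])
    return [f"[REL_{'HIGH' if score >= high else 'LOW'}] {span}"
            for score, span in scored[:top_k]]

annotated = select_and_annotate(
    "capital of France",
    ["Paris is the capital of France",
     "Berlin is in Germany",
     "France borders Spain"],
)
# annotated → ['[REL_HIGH] Paris is the capital of France',
#              '[REL_LOW] France borders Spain']
```

Because selection and annotation happen entirely at inference time on the retrieved text, a sketch like this requires no retraining of the generator; the prefix tokens simply make each span's relevance explicit in the prompt.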
Primary Area: foundation or frontier models, including LLMs
Submission Number: 18992