From Bias to Benefit: Place Good Documents in Good Positions

17 Sept 2025 (modified: 03 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: interpretability, generalization, open-domain QA
Abstract: Large language models (LLMs) exhibit a U-shaped positional bias when processing input information: they attend strongly to tokens at the beginning and end of the prompt while neglecting information in the middle, a behavior known as the Lost-in-the-Middle phenomenon. In this paper, we investigate the internal mechanisms underlying this phenomenon by analyzing how positional bias influences attention weights along both the horizontal (input-level) and vertical (layer-level) dimensions of the model. Based on these findings, we propose U-shaped Placement, a strategy that leverages the model's inherent positional bias by assigning documents to positions that align with its attention pattern. By combining this placement strategy with estimates of document importance, effectively placing good documents in good positions, we enhance the model's ability to utilize documents within two iterations. Experimental results demonstrate that our method consistently outperforms existing baselines across multiple models and datasets, indicating that leveraging positional bias can improve document utilization. Our code is submitted with the paper and will be made publicly available.
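As a rough illustration of the placement idea (a minimal sketch, not the authors' exact algorithm; the function name `u_shaped_placement` and the alternating interleaving rule are assumptions), the following Python snippet arranges documents ranked by estimated importance into a U-shaped order, so the highest-ranked documents land at the edges of the context and the lowest-ranked fall in the middle:

```python
def u_shaped_placement(docs_ranked):
    """Arrange documents so the most important sit at the beginning
    and end of the context and the least important fall in the middle.

    docs_ranked: documents sorted from most to least important
    (the importance estimator itself is assumed, not shown here).
    """
    n = len(docs_ranked)
    placed = [None] * n
    front, back = 0, n - 1
    for i, doc in enumerate(docs_ranked):
        if i % 2 == 0:
            placed[front] = doc   # odd-ranked docs fill from the front
            front += 1
        else:
            placed[back] = doc    # even-ranked docs fill from the back
            back -= 1
    return placed


# Example: 5 retrieved documents, ranked by estimated importance.
docs = ["doc1 (best)", "doc2", "doc3", "doc4", "doc5 (worst)"]
print(u_shaped_placement(docs))
# -> ['doc1 (best)', 'doc3', 'doc5 (worst)', 'doc4', 'doc2']
```

Under this assumed rule, the weakest documents occupy the middle positions that the model tends to ignore anyway, which matches the paper's stated goal of aligning document quality with the model's U-shaped attention pattern.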
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 8362