Abstract: Position bias in LLMs makes it difficult for them to access information returned by the retriever, degrading the effectiveness of RAG approaches in open-domain question answering. Recent studies reveal that this bias is related to disproportionate attention across the context. In this work, we examine directing LLMs to allocate more attention towards a selected segment of the context through prompting, so that the shortage of attention can be compensated for. We find that language models lack relative position awareness of the context but can be directed by a prompting instruction that references an exact document index. Our analysis contributes to a deeper understanding of position bias in LLMs and provides a pathway to mitigating this bias by instruction, thus helping LLMs locate and utilize relevant information from retrieved documents in RAG applications.
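As a minimal sketch of the kind of intervention the abstract describes, the snippet below builds a RAG-style prompt that labels each retrieved document with an explicit index and appends an instruction pointing the model at one of them. The function name, prompt wording, and placeholder documents are illustrative assumptions, not the paper's exact protocol.

```python
from typing import List


def build_prompt(question: str, documents: List[str], focus_index: int) -> str:
    """Format retrieved documents with explicit indices and add a directive
    steering attention to one document (illustrative wording, not the paper's)."""
    numbered = "\n\n".join(
        f"Document [{i + 1}]: {doc}" for i, doc in enumerate(documents)
    )
    directive = (
        f"Pay particular attention to Document [{focus_index + 1}] "
        "when answering the question."
    )
    return (
        "Answer the question using the documents below.\n\n"
        f"{numbered}\n\n{directive}\n\nQuestion: {question}\nAnswer:"
    )


# Example usage with placeholder documents
docs = [
    "Paris is the capital of France.",
    "Berlin is the capital of Germany.",
]
print(build_prompt("What is the capital of France?", docs, focus_index=0))
```

The key design point, per the abstract, is that the directive names an exact document index rather than a relative position (e.g. "the middle document"), since the models studied do not appear to track relative positions reliably.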
Paper Type: Short
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Interpretability and Analysis of Models for NLP, Language Modeling
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 3476