Published: 04 Oct 2024 · CC BY 4.0
Large language models (LLMs) have demonstrated remarkable performance across various real-world tasks. However, they often struggle to fully comprehend and effectively utilize their input contexts, resulting in hallucinated responses. This difficulty grows for contexts that are long or contain distracting information, which can divert LLMs from capturing essential evidence. To address this issue, many works use prompting to help LLMs comprehend contextual information more reliably. For instance, iterative prompting highlights key information in two steps: first asking the LLM to identify important pieces of context, then deriving answers accordingly. However, textual prompting methods are constrained to highlighting key information implicitly in token space, which is often insufficient to fully steer the model's attention. To improve model reading comprehension, we propose SteerPrompt, a method that automatically identifies key contextual information and explicitly highlights it by steering an LLM's attention scores. Like prompting, SteerPrompt is applied at inference time and does not require changing any model parameters. Our experiments on open-book QA demonstrate that SteerPrompt effectively enables models to grasp essential contextual information, leading to substantially improved problem-solving performance, e.g., an average improvement of 7.95% for LLAMA3-70B-Instruct. Code will be publicly available.
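The abstract does not spell out the steering mechanism, but attention steering is commonly realized as an additive bias on the attention logits of the highlighted token positions before the softmax. The sketch below is a minimal, hypothetical single-head illustration of that idea (the function name `steered_attention`, the bias strength `beta`, and the toy dimensions are all assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def steered_attention(q, k, highlight_idx, beta=2.0):
    """Single-head attention where the logits of highlighted key
    positions receive an additive bias `beta` before the softmax,
    shifting attention mass toward the key evidence tokens."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)      # (num_queries, num_keys)
    steered = logits.copy()
    steered[:, highlight_idx] += beta  # emphasize highlighted context
    return softmax(logits), softmax(steered)

# Toy example: 1 query vector, 4 key positions, highlight position 2.
rng = np.random.default_rng(0)
q = rng.normal(size=(1, 8))
k = rng.normal(size=(4, 8))
base, steer = steered_attention(q, k, highlight_idx=[2], beta=2.0)
```

Because the bias is applied only to the logits, the steered weights still form a valid distribution, and no model parameters are modified, matching the inference-time, training-free property described above.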