Keywords: Probing, Understanding high-level properties of models, Applications of interpretability
TL;DR: Bug Attention Probe (BAP) leverages LLM attention to localize bugs without tests or labels, beating baselines and GPT-4o while being far more efficient.
Abstract: Ensuring code correctness remains a challenging problem even
as large language models (LLMs) become increasingly capable at code-related tasks. While LLM-based program repair systems can propose bug fixes using only a user's bug report, their effectiveness is fundamentally limited by their ability to perform fault localization (FL), a challenging problem for both humans and LLMs.
Existing FL approaches rely on executable test cases, require training on costly and often noisy line-level annotations, or demand resource-intensive LLMs.
In this paper, we present Bug Attention Probe (BAP), a method that learns state-of-the-art fault localization without any direct localization labels, outperforming traditional FL baselines and prompting of large-scale LLMs.
We evaluate our approach across a variety of code settings, including real-world Java bugs from the standard Defects4J dataset as well as seven other datasets which span a diverse set of bug types and languages. Averaged across all eight datasets,
BAP improves top-1 accuracy by 34.6% over the strongest baseline and by 93.4% over zero-shot prompting of GPT-4o. BAP is also significantly more efficient than prompting, outperforming large open-weight models at a small fraction of the computational cost.
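The abstract only sketches the method, but the core idea is a lightweight probe over LLM attention. Below is a minimal, hypothetical sketch assuming per-line aggregation of attention weights and a linear scoring head; the model name, aggregation scheme, and probe shape are illustrative assumptions, not the paper's exact BAP implementation.

```python
# Hedged sketch: an attention-based probe for line-level fault localization.
# All specifics (model, aggregation, probe) are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"  # stand-in for a code LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_attentions=True)
model.eval()

def line_attention_features(code: str) -> torch.Tensor:
    """Aggregate per-token attention into one feature vector per source line."""
    enc = tokenizer(code, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0]
    with torch.no_grad():
        out = model(**enc)
    # Stack attentions: (layers, heads, seq, seq); average over the query axis
    # so each token gets one attention-received score per (layer, head).
    attn = torch.stack(out.attentions).squeeze(1)                   # (L, H, S, S)
    token_scores = attn.mean(dim=2).permute(2, 0, 1)                # (S, L, H)
    token_scores = token_scores.reshape(token_scores.size(0), -1)   # (S, L*H)

    # Map each token to its source line via character offsets.
    line_starts = [0] + [i + 1 for i, c in enumerate(code) if c == "\n"]
    def line_of(char_pos: int) -> int:
        return max(i for i, s in enumerate(line_starts) if s <= char_pos)
    token_lines = [line_of(int(start)) for start, _ in offsets]

    # Mean-pool token features within each line.
    n_lines = code.count("\n") + 1
    feats = torch.zeros(n_lines, token_scores.size(1))
    counts = torch.zeros(n_lines, 1)
    for tok_idx, ln in enumerate(token_lines):
        feats[ln] += token_scores[tok_idx]
        counts[ln] += 1
    return feats / counts.clamp(min=1)

# A linear probe scores each line's suspiciousness; in principle it can be
# trained with only weak program-level supervision (e.g., buggy vs. fixed
# versions) rather than costly line-level annotations.
probe = torch.nn.Linear(
    model.config.num_hidden_layers * model.config.num_attention_heads, 1
)

code = "def div(a, b):\n    return a + b  # bug: should be a / b\n"
suspiciousness = probe(line_attention_features(code)).squeeze(-1)
print("Most suspicious line:", int(suspiciousness.argmax()))
```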
Submission Number: 214