Attention-likelihood relationship in Transformers

01 Mar 2023 (modified: 13 Apr 2025) · Submitted to Tiny Papers @ ICLR 2023
Keywords: Transformers, self-attention, token likelihood
TL;DR: Token likelihood correlates with attention values in transformer-based language models.
Abstract: We analyze how large language models (LLMs) represent out-of-context words, investigating how much they rely on the surrounding context to capture the semantics of such words. Our likelihood-guided text perturbations reveal a correlation between token likelihood and attention values in transformer-based language models. Extensive experiments show that unexpected tokens cause the model to attend less to the information coming from themselves when computing their representations, particularly at higher layers. These findings have valuable implications for assessing the robustness of LLMs in real-world scenarios. Fully reproducible codebase at [url].
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/attention-likelihood-relationship-in/code)
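
The abstract describes relating each token's likelihood under the model to the attention that token pays to itself across layers. Below is a minimal sketch of one way such an analysis could look, assuming a Hugging Face masked language model (`bert-base-uncased`), last-layer attention averaged over heads, and Spearman correlation; the model choice, the example sentence, and the likelihood proxy (the unmasked model's probability of each observed token at its own position) are illustrative assumptions, not the authors' exact setup from the linked codebase.

```python
# Sketch: correlate token likelihood with self-attention in a transformer LM.
# Assumed setup (not necessarily the paper's): bert-base-uncased, last layer,
# attention averaged over heads, Spearman rank correlation.
import torch
from scipy.stats import spearmanr
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # assumed model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name, output_attentions=True)
model.eval()

# A perturbed sentence with an unexpected ("out-of-context") token: "guitar".
sentence = "The chef seasoned the guitar before serving it."

inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Likelihood proxy: probability the model assigns to each observed token
# at its own position (higher = more expected given the context).
probs = outputs.logits.softmax(dim=-1)[0]                     # (seq, vocab)
token_ids = inputs["input_ids"][0]                            # (seq,)
likelihoods = probs[torch.arange(len(token_ids)), token_ids]  # (seq,)

# Self-attention each token pays to itself: diagonal of the attention map,
# averaged over heads, taken from the last layer as one "higher layer".
last_layer_attn = outputs.attentions[-1][0]                   # (heads, seq, seq)
self_attention = last_layer_attn.mean(dim=0).diagonal()       # (seq,)

# Correlation between likelihood and self-attention across token positions.
rho, p_value = spearmanr(likelihoods.numpy(), self_attention.numpy())
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")
```

Running this over a corpus of likelihood-guided perturbations, rather than a single sentence, would give the layer-wise correlations the abstract refers to; the single-sentence version above is only meant to show the moving parts.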