Attend or Perish: Benchmarking Attention in Algorithmic Reasoning

ACL ARR 2025 February Submission 7842 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Can transformers learn to perform algorithmic tasks reliably across previously unseen inputs? While pre-trained language models show solid accuracy on benchmarks incorporating algorithmic reasoning, assessing the reliability of these results requires the ability to distinguish models' functional capabilities from memorization. In this paper, we propose an algorithmic benchmark comprising six tasks with infinite input domains, for which we can also disentangle and trace the correct, robust algorithm required to solve the task. This allows us to assess (i) models' ability to extrapolate to unseen types of inputs, including new lengths, value ranges, or input domains, and (ii) the robustness of the functional mechanisms of recent models through the lens of their attention maps. We make the implementation of all our tasks and interpretability methods publicly available.\footnote{See the supplementary materials.}
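As an illustration of the attention-map inspection the abstract refers to, the minimal sketch below shows how per-layer attention maps can be extracted from a pre-trained transformer with the Hugging Face transformers library. The model name, the toy copy-task prompt, and the head-averaging step are illustrative assumptions, not the paper's released implementation (see the supplementary materials for that).

```python
# Minimal sketch: extract attention maps from a pre-trained transformer
# to inspect where the model attends while solving an algorithmic task.
# Model name and the toy "copy" prompt are placeholders, not from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the evaluated models may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_attentions=True)
model.eval()

prompt = "Copy the sequence: 7 3 9 2 ->"  # toy algorithmic-task input
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len); averaging over heads gives a
# per-layer map that can be compared against the expected algorithm.
for layer_idx, layer_attn in enumerate(outputs.attentions):
    per_layer_map = layer_attn.mean(dim=1)[0]  # (seq_len, seq_len)
    print(f"layer {layer_idx}: attention map shape {tuple(per_layer_map.shape)}")
```

Such head-averaged maps can then be checked against the attention pattern a correct, robust algorithm would require (e.g., attending to the token being copied), which is the kind of mechanism-level robustness check the abstract describes.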
Paper Type: Short
Research Area: Resources and Evaluation
Research Area Keywords: extrapolation, reasoning, algorithmic reasoning, evaluation, interpretability
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 7842