Pitfalls of Scale: Investigating the Inverse Task of Redefinition in Large Language Models

ACL ARR 2025 February Submission 4315 Authors

15 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Inverse tasks can uncover reasoning gaps that emerge as Large Language Models (LLMs) scale up. In this work, we explore the redefinition task, in which we assign alternative values to well-known physical constants and units of measure and prompt LLMs to respond accordingly. Our findings show that not only does model performance degrade with scale, but the models' false confidence also rises. Moreover, while factors such as prompting strategy and response formatting are influential, they do not prevent LLMs from anchoring to memorized values.
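
As a concrete illustration of the redefinition setup described in the abstract, the sketch below shows how such a prompt might be constructed. This is a minimal sketch, not the paper's actual protocol: the template wording, the chosen constant, the redefined value, and the follow-up question are all hypothetical examples.

```python
# Minimal sketch of a redefinition-style prompt (hypothetical; the
# constant, its redefined value, and the question are illustrative
# examples, not the paper's data).

REDEFINITION_TEMPLATE = (
    "For the rest of this conversation, assume that {constant} is "
    "redefined to be {new_value} (instead of its textbook value of "
    "{true_value}). Answer all questions under this redefinition.\n\n"
    "Question: {question}"
)

def build_redefinition_prompt(constant: str, true_value: str,
                              new_value: str, question: str) -> str:
    """Fill the template with a redefined constant and a question
    whose correct answer depends on honoring the new value."""
    return REDEFINITION_TEMPLATE.format(
        constant=constant,
        true_value=true_value,
        new_value=new_value,
        question=question,
    )

if __name__ == "__main__":
    prompt = build_redefinition_prompt(
        constant="the speed of light c",
        true_value="299,792,458 m/s",
        new_value="150,000,000 m/s",
        question=("Under the redefinition, how far does light travel "
                  "in a vacuum in 2 seconds, in meters?"),
    )
    print(prompt)
    # A model that anchors to the memorized value would answer
    # 599,584,916 m; the redefinition-consistent answer is 300,000,000 m.
```

Comparing the model's answer against the redefinition-consistent value, rather than the textbook value, is what exposes anchoring to memorized constants.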
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: prompting, scaling, hardness of samples, reasoning, interpretability
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 4315