First Heuristic Then Rational: Dynamic Use of Heuristics in Language Model Reasoning

ACL ARR 2024 June Submission1988 Authors

15 Jun 2024 (modified: 06 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Multi-step reasoning is widely adopted in the community to elicit better performance from language models (LMs). We report on the systematic strategy that LMs employ in this process. Our controlled experiments reveal that LMs rely more heavily on heuristics, such as lexical overlap, in the earlier stages of reasoning, when more steps are still required to reach an answer. Conversely, as LMs progress closer to the final answer, their reliance on heuristics decreases. This suggests that LMs track only a limited number of future steps and dynamically combine heuristic strategies with logical ones in tasks involving multi-step reasoning.
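To make the notion of a "lexical overlap" heuristic concrete, the sketch below scores candidate reasoning steps by how many of their tokens also appear in the question. This is a minimal illustration, not the authors' measurement code; the function name, the tokenization, and the example question and candidates are all assumptions introduced here.

```python
# Minimal sketch (not the paper's code): score candidate reasoning steps by
# lexical overlap with the question, the kind of surface heuristic the study
# analyzes. Names and examples are illustrative assumptions.

def lexical_overlap(question: str, candidate: str) -> float:
    """Fraction of the candidate's tokens that also appear in the question."""
    q_tokens = set(question.lower().split())
    c_tokens = candidate.lower().split()
    if not c_tokens:
        return 0.0
    return sum(t in q_tokens for t in c_tokens) / len(c_tokens)


question = "Which country is the birthplace of the author of 'Norwegian Wood'?"
candidates = [
    # An early step that echoes the question's wording (high overlap).
    "The author of 'Norwegian Wood' is Haruki Murakami.",
    # A later step closer to the answer, sharing fewer tokens with the question.
    "Haruki Murakami was born in Kyoto, Japan.",
]

for cand in candidates:
    print(f"{lexical_overlap(question, cand):.2f}  {cand}")
```

Under the paper's finding, early steps in a multi-step chain tend to look like the first candidate (heuristically overlapping the question), while steps nearer the final answer rely less on such overlap.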
Paper Type: Short
Research Area: Language Modeling
Research Area Keywords: analysis, few-shot learning, multihop QA, reasoning
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 1988