Large Language Models Exhibit Limited Reasoning Ability on Coding Questions

ACL ARR 2025 July Submission913 Authors

29 Jul 2025 (modified: 20 Aug 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Claims that large language models (LLMs) have complex reasoning ability have stirred broad interest, and controversy, among academics and non-academics alike. A popular basis for such claims is LLMs' ability to solve coding questions, which involves understanding the question statement and providing code that solves it. Although such abilities are remarkable feats worth praising, we argue that they stem from memorization rather than reasoning. We first show that LLMs' question-solving ability degrades as questions become more recent, likely due to the reduced amount of training data available for recent questions. Additionally, we show that an LLM often fails to solve a question when presented with a reworded but equivalent question statement, further suggesting its limited reasoning ability.
Paper Type: Short
Research Area: Question Answering
Research Area Keywords: logical reasoning, reasoning, generalization
Contribution Types: Model analysis & interpretability, Position papers
Languages Studied: English
Previous URL: https://openreview.net/forum?id=t2UWNdsmD0
Explanation Of Revisions PDF: pdf
Reassignment Request Area Chair: No, I want the same area chair from our previous submission (subject to their availability).
Reassignment Request Reviewers: No, I want the same set of reviewers from our previous submission (subject to their availability).
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: Yes
A2 Elaboration: Ethics Statement
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Section 2
B2 Discuss The License For Artifacts: No
B2 Elaboration: Provided in the citation
B3 Artifact Use Consistent With Intended Use: Yes
B3 Elaboration: Section 2
B4 Data Contains Personally Identifying Info Or Offensive Content: N/A
B5 Documentation Of Artifacts: N/A
B6 Statistics For Data: Yes
B6 Elaboration: Section 2
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: Section 2
C2 Experimental Setup And Hyperparameters: N/A
C3 Descriptive Statistics: Yes
C3 Elaboration: Section 3
C4 Parameters For Packages: Yes
C4 Elaboration: Section 2
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: No
E1 Information About Use Of Ai Assistants: N/A
Author Submission Checklist: yes
Submission Number: 913