\title{The Logical Reasoning Dilemma of LLMs: A Mapping Deficit in Representation}
Keywords: Large Language Models, Representation, Representation Acquisition, Mapping, Psychosemantic Modeling, Semantic Web, Logical Reasoning, Comparison, Categorization, Statistics
Abstract: \begin{abstract}
Large Language Models (LLMs) based on the Generative Pre-trained Transformer architecture have achieved breakthroughs in multilingual language processing and perform at a near-human level on these tasks. However, they show significant shortcomings on tasks involving reasoning, especially logical reasoning. In this paper, we argue that logical reasoning competence requires a particular kind of representation acquisition capability. On this basis, and in order to assess whether LLMs have the potential to overcome their logical reasoning shortcomings in subsequent development, we compare the representation acquisition processes of humans and LLMs. This comparison reveals that although LLMs use representations similar to humans' when processing multilingual language tasks, they lack the representation acquisition capability that humans possess: there is a fundamental difference between the two acquisition processes, which we characterize as a $mapping\ deficit$ in LLMs' representation acquisition. This $mapping\ deficit$ explains both why LLMs succeed at multilingual language tasks despite lacking human-like representation acquisition, and why their logical reasoning competence remains significantly deficient. This work aims to inform future improvements to the logical reasoning competence of LLMs; we believe that if the $mapping\ deficit$ in LLMs' representation acquisition is resolved, their logical reasoning competence will improve accordingly.
\end{abstract}
Primary Area: causal reasoning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13917