A Survey on Enhancing Large Language Models with Symbolic Reasoning

ACL ARR 2025 February Submission 3349 Authors

15 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Reasoning is a fundamental human ability, central to problem-solving, decision-making, and planning. With the development of large language models (LLMs), significant attention has been given to enhancing and understanding their reasoning capabilities. Most existing work applies LLMs directly to reasoning expressed in natural language. However, because of the inherent semantic ambiguity and grammatical complexity of natural language, LLMs often struggle with complex problems, leading to challenges such as hallucinations and inconsistent reasoning. Methods that instead construct formal-language representations, most of which are symbolic languages, have therefore emerged recently to obtain more reliable solutions. In this work, we focus on symbolic reasoning in LLMs and provide a comprehensive review of the related research, covering the applications of symbolic reasoning, the types of symbolic languages used, techniques for enhancing LLMs' symbolic reasoning abilities, and benchmarks employed to evaluate their performance. Our goal is to offer a thorough review of symbolic reasoning in LLMs, highlighting key findings and challenges while providing a reference for future research in this area.
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Large Language Models; Symbolic Reasoning
Contribution Types: Surveys
Languages Studied: English
Submission Number: 3349