A Survey on Enhancing Large Language Models with Symbolic Reasoning

ACL ARR 2025 May Submission1877 Authors

18 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Reasoning is a fundamental human ability, central to problem-solving, decision-making, and planning. With the development of large language models (LLMs), significant attention has been paid to enhancing and understanding their reasoning capabilities. Most existing works use LLMs directly for natural-language-based reasoning. However, due to the inherent semantic ambiguity and complexity of natural language, LLMs often struggle with complex problems, leading to challenges such as hallucinations and inconsistent reasoning. Consequently, techniques that translate problems into formal language representations, most of them symbolic languages, have emerged. In this work, we focus on symbolic reasoning in LLMs and provide a comprehensive review of the related research, covering the types of symbolic languages used, different symbolic reasoning tasks and related benchmarks, and typical techniques for enhancing LLMs' symbolic reasoning abilities. Our goal is to offer a thorough review of symbolic reasoning in LLMs, highlighting key findings and challenges while providing a reference for future research in this area.
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: Large Language Models; Symbolic Reasoning
Contribution Types: Surveys
Languages Studied: English
Submission Number: 1877