What else does attention need: Neurosymbolic approaches to general logical reasoning in LLMs?

TMLR Paper 3585 Authors

29 Oct 2024 (modified: 31 Oct 2024) · Under review for TMLR · CC BY 4.0
Abstract: General logical reasoning is perhaps the most impenetrable challenge for large language models (LLMs). We define general logical reasoning as the ability to reason deductively on domain-agnostic tasks. Current LLMs fail to reason deterministically and are not interpretable. As such, there has been a recent surge in interest in neurosymbolic AI, a research area that attempts to incorporate logic into neural networks. We first identify two main neurosymbolic approaches to improving logical reasoning: (i) the integrative approach, comprising models where symbolic reasoning is contained within the neural network, and (ii) the hybrid approach, comprising models where a symbolic solver, separate from the neural network, performs symbolic reasoning. Both approaches encompass AI systems with promising results on domain-specific logical reasoning benchmarks. However, their performance on domain-agnostic benchmarks is understudied. To the best of our knowledge, there has not been a comparison of the contrasting approaches that answers the following question: Which approach is more promising for developing general logical reasoning without sacrificing the capabilities of existing LLMs? To analyze their potential, we introduce the following best-in-class domain-agnostic models: Logic Neural Network (LNN), which uses the integrative approach, and LLM-Symbolic Solver (LLM-SS), which uses the hybrid approach. Compared to the current state-of-the-art neurosymbolic models, LNN achieves faster convergence and higher accuracy, while LLM-SS delivers a lower error rate. Using both models as case studies and representatives of each approach, our analysis demonstrates that the hybrid approach is more promising for developing general logical reasoning because (i) its reasoning chain is more interpretable than that of the integrative approach, and (ii) it retains the capabilities and advantages of existing LLMs. To support future work using the hybrid approach to improve general logical reasoning, we propose a generalizable neurosymbolic framework based on LLM-SS that is modular by design, model-agnostic, domain-agnostic, and requires little to no human input.
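
To make the distinction between the two approaches concrete, the sketch below shows how a hybrid pipeline in the spirit of LLM-SS might be wired: the LLM only translates natural language into symbolic form, and a separate symbolic solver performs the deduction. This is a minimal illustration under assumed interfaces, not the paper's implementation; names such as `translate_to_logic`, `Rule`, and `forward_chain` are hypothetical, and the LLM call is stubbed out with a hard-coded translation.

```python
# Hypothetical sketch of a hybrid neurosymbolic pipeline (LLM -> symbolic solver).
# The neural component is stubbed; the symbolic component is a tiny
# forward-chaining engine over propositional Horn-style rules.

from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    premises: tuple   # facts that must all hold
    conclusion: str   # fact derived when they do


def translate_to_logic(problem: str):
    """Placeholder for the neural component: an LLM prompted to emit facts
    and rules from natural language. Hard-coded here for one example."""
    facts = {"human(socrates)"}
    rules = [Rule(("human(socrates)",), "mortal(socrates)")]
    return facts, rules


def forward_chain(facts: set, rules: list) -> set:
    """Symbolic component: apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if all(p in derived for p in rule.premises) and rule.conclusion not in derived:
                derived.add(rule.conclusion)
                changed = True
    return derived


if __name__ == "__main__":
    facts, rules = translate_to_logic(
        "Socrates is human. All humans are mortal. Is Socrates mortal?"
    )
    print("mortal(socrates)" in forward_chain(facts, rules))  # True
```

In such a design, every derived fact traces back to an explicit rule application, which is the sense in which the hybrid approach's reasoning chain is more interpretable than reasoning carried out inside the network's weights.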
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Yingnian_Wu1
Submission Number: 3585