Beyond Translation: A Decomposed Collaborative Reasoning Framework Harnessing LLMs and Symbolic Solvers
Keywords: Large Language Models, Logic solvers, First Order Logic, Language Model Reasoning
Abstract: The integration of large language models (LLMs) with symbolic solvers is a promising direction for enhancing logical reasoning capabilities. However, existing methods often underutilize LLMs, treating them primarily as text-to-symbol converters, and they remain prone to hallucinations and performance degradation on complex problems that require multi-step intermediate reasoning. To address these limitations, we propose a novel framework that integrates LLMs with logical solvers more deeply. Our approach employs a two-step, fine-grained decomposition of the reasoning process. First, the LLM analyzes the problem and produces an outline for solving it. Second, the LLM decomposes the original problem into simpler sub-problems and generates formal conditions for each. This strategy fully leverages the LLM's strengths in contextual exploration and problem structuring while offloading rigorous deduction to the symbolic solver, thereby mitigating hallucinations. Extensive experimental evaluation demonstrates that our framework significantly outperforms state-of-the-art baselines. Our work provides a more effective paradigm for applying LLMs to complex reasoning tasks. Our code will be released upon publication.
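The two-step pipeline in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `llm_outline` and `llm_decompose` are hypothetical stand-ins that return canned outputs in place of real LLM calls, and the "symbolic solver" is a toy forward-chaining engine over propositional Horn clauses rather than a full first-order prover.

```python
def llm_outline(problem: str) -> str:
    # Step 1 (mocked LLM call): analyze the problem and draft a solving plan.
    return "1) formalize premises; 2) split the query; 3) deduce symbolically"

def llm_decompose(problem: str) -> list:
    # Step 2 (mocked LLM call): emit sub-problems with formal conditions,
    # here encoded as facts, Horn rules (body -> head), and a sub-query.
    return [
        {"facts": {"rain"}, "rules": [({"rain"}, "wet")], "query": "wet"},
        {"facts": set(), "rules": [({"wet"}, "slippery")], "query": "slippery"},
    ]

def solve(facts: set, rules: list) -> set:
    # Toy symbolic solver: forward chaining to a fixed point.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def answer(problem: str) -> bool:
    plan = llm_outline(problem)           # contextual exploration by the LLM
    subproblems = llm_decompose(problem)  # fine-grained decomposition
    facts = set()
    for sp in subproblems:                # rigorous deduction is offloaded
        facts = solve(facts | sp["facts"], sp["rules"])
        if sp["query"] not in facts:
            return False
    return True

print(answer("It rains; rain wets the ground; wet ground is slippery. Slippery?"))
```

The point of the division of labor is visible even in this sketch: the LLM-side functions only structure the problem, while every derived fact is produced by the deterministic `solve` step, so no conclusion can be hallucinated.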
Submission Number: 20