Keywords: Multi-step reasoning, LLM applications, High-precision code generation
Abstract: Large language models (LLMs) such as ChatGPT and GPT-4 exhibit impressive capabilities across a wide range of generative tasks. However, their performance is often hindered by limited access to long-term memory, leading to specific vulnerabilities and biases, especially during prolonged interactions. This paper introduces ChatLogic, a framework that augments LLMs with logical reasoning. In ChatLogic, the LLM plays the central role of controller and participates in every phase of the system's operation. We present a novel method for translating logic questions into symbolic representations that a reasoning engine can process. This approach harnesses the contextual understanding and imitation skills of LLMs, employing symbolic memory to enhance multi-step deductive reasoning. Our findings show that the ChatLogic framework markedly improves the multi-step reasoning capabilities of native LLMs. The source code and data are available at https://github.com/Strong-AI-Lab/ChatLogic.
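To make the approach concrete, the sketch below illustrates the kind of translate-then-reason loop the abstract describes: natural-language facts and rules are rendered in symbolic form (here written by hand; in ChatLogic this translation would be produced by the LLM), and a logic engine derives the answer through chained deduction steps. This is a minimal plain-Python illustration under our own assumptions; the facts, rules, and toy forward-chaining engine are hypothetical and not taken from the ChatLogic codebase, which pairs the LLM with a dedicated reasoning engine.

```python
# Minimal forward-chaining deduction sketch (plain Python, no external
# dependencies). An LLM would translate a natural-language story into
# symbolic facts/rules like these; the engine then performs the
# multi-step inference that the LLM alone tends to get wrong.

from itertools import product

# Facts the LLM might extract: parent relationships, as symbolic triples.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}


def forward_chain(facts):
    """Apply two ancestor rules until no new facts can be derived:
    Rule 1: parent(X, Y)                 -> ancestor(X, Y)
    Rule 2: parent(X, Z) & ancestor(Z, Y) -> ancestor(X, Y)
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        # Rule 1: every parent is an ancestor.
        for rel, x, y in list(derived):
            if rel == "parent" and ("ancestor", x, y) not in derived:
                derived.add(("ancestor", x, y))
                changed = True
        # Rule 2: transitivity, chaining a parent fact with an ancestor fact.
        for (r1, x, z), (r2, z2, y) in product(list(derived), repeat=2):
            if r1 == "parent" and r2 == "ancestor" and z == z2:
                if ("ancestor", x, y) not in derived:
                    derived.add(("ancestor", x, y))
                    changed = True
    return derived


# Multi-step query: is alice an ancestor of carol? (requires two deduction steps)
closure = forward_chain(facts)
print(("ancestor", "alice", "carol") in closure)  # True
```

The symbolic closure acts as the "symbolic memory" mentioned in the abstract: intermediate conclusions such as ancestor(bob, carol) are stored explicitly and reused in later steps, rather than being held implicitly in the LLM's context window.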
Submission Number: 15