Enhancing Complex Symbolic Logical Reasoning of Large Language Models via Sparse Multi-Agent Debate
Keywords: Logical Reasoning, Symbolic AI, Multi-agent System, Large Language Models
Abstract: Large language models (LLMs) struggle with complex logical reasoning. Previous work has primarily explored single-agent methods, whose performance remains fundamentally limited by the capabilities of a single model. To our knowledge, this paper is the first to introduce a multi-agent approach specifically designed to enhance the logical reasoning abilities of LLMs. Considering the respective strengths and weaknesses of symbolic and natural language reasoning, we propose a multi-agent framework in which individual agents reason in both symbolic and natural languages and then engage in a debate. To ensure the accuracy of symbolic translation, we also leverage multiple agents to translate and debate in different symbolic languages. Because multi-turn interactions incur prohibitive communication and token costs, we further propose an adaptive sparse communication strategy to ensure efficiency. Specifically, our method prunes unnecessary communication by assessing agent confidence and information gain, allowing each agent to selectively update its memory with the most valuable outputs of other agents when generating answers. Extensive experiments demonstrate that our multi-agent debate framework not only outperforms previous methods on logical reasoning tasks, but our sparse communication approach also outperforms fully-connected communication while reducing token costs by 25%, improving both effectiveness and efficiency.
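The adaptive sparse communication idea in the abstract can be sketched as follows. This is a hypothetical illustration only: the `Message`, `Agent`, `info_gain`, and `sparse_round` names, the thresholds, and the token-novelty proxy for information gain are all assumptions for exposition, not the paper's actual formulation.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    answer: str
    confidence: float  # sender's self-assessed confidence in [0, 1]

@dataclass
class Agent:
    name: str
    memory: list = field(default_factory=list)

    def info_gain(self, msg: Message) -> float:
        # Toy proxy for information gain: fraction of the message's
        # tokens that are not already present in this agent's memory.
        seen = {tok for m in self.memory for tok in m.answer.split()}
        toks = msg.answer.split()
        if not toks:
            return 0.0
        return sum(tok not in seen for tok in toks) / len(toks)

def sparse_round(agents, messages, conf_thresh=0.6, gain_thresh=0.3):
    """One debate round: drop low-confidence messages, then let each
    agent keep only messages from others whose estimated information
    gain clears a threshold (pruning the rest of the communication)."""
    kept = 0
    confident = [m for m in messages if m.confidence >= conf_thresh]
    for agent in agents:
        for msg in confident:
            if msg.sender != agent.name and agent.info_gain(msg) >= gain_thresh:
                agent.memory.append(msg)
                kept += 1
    return kept
```

Under this sketch, a message is transmitted to an agent only when the sender is confident and the content is sufficiently novel to the receiver, which is one plausible way the claimed token savings over fully-connected communication could arise.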
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 14334