Efficient First-Order Logic-Based Method for Enhancing Logical Reasoning Capabilities of LLMs

Published: 23 Sept 2025 · Last Modified: 07 Dec 2025 · FoRLM 2025 · License: CC BY 4.0
Keywords: logical reasoning, large language models, multi-agent debate
Abstract: Large language models (LLMs) struggle with complex logical reasoning. Previous work has primarily explored single-agent methods, whose performance remains fundamentally limited by the capabilities of a single model. To our knowledge, this paper is the first to introduce a multi-agent approach specifically designed to enhance the logical reasoning abilities of LLMs. Because multi-turn interactions incur prohibitive communication and token costs, we propose an adaptive sparse communication strategy to ensure efficiency. Specifically, our method prunes unnecessary communication by assessing agent confidence and information gain, allowing each agent to selectively update its memory with the most valuable outputs from other agents when generating answers. Extensive experiments demonstrate that our sparse communication approach outperforms fully connected communication while reducing token costs by 25%, improving both effectiveness and efficiency.
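The abstract describes the pruning mechanism only at a high level. Below is a minimal sketch of how confidence- and information-gain-based pruning of agent communication could look; the `Agent` class, the entropy-reduction proxy for information gain, and both thresholds are illustrative assumptions, not the authors' implementation.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    confidence: float                 # self-reported confidence in [0, 1]
    answer_dist: dict[str, float]     # probability over candidate answers
    memory: list[str] = field(default_factory=list)

def entropy(dist: dict[str, float]) -> float:
    """Shannon entropy (nats) of an answer distribution."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def information_gain(receiver: Agent, message_dist: dict[str, float]) -> float:
    """Assumed proxy for information gain: the receiver's entropy reduction
    after mixing its answer distribution with a peer's message distribution."""
    keys = set(receiver.answer_dist) | set(message_dist)
    mixed = {a: 0.5 * receiver.answer_dist.get(a, 0.0)
                + 0.5 * message_dist.get(a, 0.0) for a in keys}
    return entropy(receiver.answer_dist) - entropy(mixed)

def sparse_round(agents: list[Agent],
                 conf_threshold: float = 0.7,
                 gain_threshold: float = 0.05) -> None:
    """One communication round with two-sided pruning (thresholds are
    hypothetical): low-confidence agents do not broadcast, and receivers
    store only peer messages whose estimated information gain is high."""
    senders = [a for a in agents if a.confidence >= conf_threshold]
    for receiver in agents:
        for sender in senders:
            if sender is receiver:
                continue
            if information_gain(receiver, sender.answer_dist) >= gain_threshold:
                best = max(sender.answer_dist, key=sender.answer_dist.get)
                receiver.memory.append(f"{sender.name}: {best}")

agents = [
    Agent("A", 0.9, {"yes": 0.8, "no": 0.2}),
    Agent("B", 0.4, {"yes": 0.5, "no": 0.5}),  # low confidence: pruned as a sender
    Agent("C", 0.8, {"no": 0.9, "yes": 0.1}),
]
sparse_round(agents)
print(agents[1].memory)  # B broadcasts nothing but keeps only high-gain peer messages
```

In this sketch the pruning is two-sided, matching the abstract's description: the confidence check removes sender edges entirely, while the information-gain check lets each receiver keep only the most valuable peer outputs in its memory, so tokens are spent only on messages likely to change an agent's answer.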
Submission Number: 81