Improving LLM Reasoning via Symbolic Inference over Logic Graphs

20 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Logical reasoning, Large language models, Neuro-symbolic methods, Logic graphs, Symbolic inference
Abstract: Large language models (LLMs) exhibit strong language understanding but remain limited in logical reasoning, particularly in multi-hop inference involving complex contextual dependencies. We propose Graph-based Planned Reasoning (GPR), a neuro-symbolic framework that enhances LLM reasoning by organizing the process into structured stages. GPR builds a logic graph that captures fine-grained symbolic relations from the natural language context, then leverages a Planner to generate a goal-directed reasoning strategy. A dedicated Reasoner conducts step-wise symbolic inference along this plan, while Critic modules act as internal validators, checking and revising the logic graph and the final inference when necessary. This design enables GPR to perform faithful, interpretable reasoning while remaining robust to irrelevant or misleading information. Experiments across multiple logical reasoning benchmarks demonstrate that GPR consistently outperforms existing reasoning baselines and remains robust under noisy conditions.
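The staged pipeline the abstract describes (logic graph → Reasoner → Critic) can be sketched in miniature. This is a hypothetical illustration only: the class and function names below are assumptions, not the authors' implementation, and the LLM-driven Planner/Reasoner/Critic modules are stood in for by simple symbolic routines.

```python
# Hypothetical sketch of the GPR stages; all names here are illustrative
# assumptions, not the paper's actual code. LLM components are replaced
# by simple symbolic stand-ins to show the control flow.

def build_logic_graph(rules):
    """Stage 1: capture symbolic relations as a graph (premise -> conclusions)."""
    graph = {}
    for premise, conclusion in rules:
        graph.setdefault(premise, set()).add(conclusion)
    return graph

def reason(graph, facts, goal):
    """Stage 2 (Reasoner stand-in): step-wise forward chaining over the graph,
    returning the answer and the set of derived facts as an inference trace."""
    derived = set(facts)
    frontier = list(facts)
    while frontier:
        fact = frontier.pop()
        for conclusion in graph.get(fact, ()):
            if conclusion not in derived:
                derived.add(conclusion)
                frontier.append(conclusion)
    return goal in derived, derived

def critic(graph, facts, answer, goal, derived):
    """Stage 3 (Critic stand-in): validate that every derived fact is either a
    given fact or entailed by some other derived fact, and that the answer
    matches the trace."""
    supported = all(
        d in facts or any(d in graph.get(p, ()) for p in derived)
        for d in derived
    )
    return supported and answer == (goal in derived)

# Example: "if it rains the ground is wet; if wet, it is slippery."
graph = build_logic_graph([("rains", "wet"), ("wet", "slippery")])
answer, trace = reason(graph, {"rains"}, "slippery")
```

Even in this toy form, the separation matters: the Critic re-checks the trace independently of how the Reasoner produced it, which is what lets the full framework catch and revise faulty inferences.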
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 24179