Keywords: Logical Reasoning, Large Language Models, Knowledge Compilation
TL;DR: Our work introduces a mechanism that performs A Priori Knowledge Compilation—proactively deriving foundational facts and composing powerful new rules—to enable robust Shortcut Reasoning and cure the Transitivity Curse in LLMs.
Abstract: While large language models (LLMs) have shown remarkable reasoning abilities, they often fail at multi-hop logical reasoning tasks that require chaining inferences, struggling to deduce transitive relations like $(P \to R)$ from $(P \to Q) \land (Q \to R)$. This fundamental limitation, which we term the \textbf{``Transitivity Curse''}, leads to brittle reasoning chains and significant error propagation. Existing reasoning frameworks, often based on Chain-of-Thought, attempt to traverse these long paths sequentially, a process that is both inefficient and prone to failure as complexity increases. To cure this curse, we introduce a novel mechanism designed to be integrated into existing logical reasoners. Our mechanism shifts the paradigm from passively traversing reasoning chains to proactively compiling them through a process we call \textbf{A Priori Knowledge Compilation (APKC)}. This process unfolds in two critical phases. First, it employs a goal-oriented backward analysis to identify a focused, relevant subgraph of the knowledge base. Subsequently, within this constrained boundary, our mechanism performs a systematic forward-chaining process to synthesize new knowledge in the form of both foundational \textbf{derived facts} and powerful \textbf{composite rules}. This compiled knowledge collapses multi-step inferences into fewer, more robust steps. By allowing a host framework to leverage this compiled knowledge, our mechanism enables a more direct form of \textbf{Shortcut Reasoning}, drastically reducing the required depth of runtime inference. Experiments show that when integrated into state-of-the-art reasoning frameworks, our mechanism consistently and significantly boosts their performance on several logical reasoning benchmarks. Our findings demonstrate that APKC, as a plug-in mechanism, is a critical component for making existing LLM-based reasoners more robust, efficient, and trustworthy.
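The two-phase process described in the abstract can be illustrated with a minimal sketch over implication rules represented as `(premise, conclusion)` pairs. This is not the authors' implementation; the function names (`backward_slice`, `forward_chain`) and the toy rule base are illustrative assumptions, and real knowledge bases would involve richer rule forms than binary implications.

```python
def backward_slice(rules, goal):
    """Phase 1: goal-oriented backward analysis.

    Keep only the rules that can contribute to deriving the goal,
    yielding a focused subgraph of the knowledge base."""
    relevant, frontier = set(), {goal}
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if q in frontier and (p, q) not in relevant:
                relevant.add((p, q))
                frontier.add(p)
                changed = True
    return relevant


def forward_chain(rules):
    """Phase 2: systematic forward chaining within the sliced subgraph.

    Compose (P -> Q) and (Q -> R) into the composite rule (P -> R),
    iterating to a fixed point. Each composite rule is a reasoning
    shortcut that collapses a multi-hop chain into a single step."""
    compiled = set(rules)
    changed = True
    while changed:
        changed = False
        for p, q in list(compiled):
            for q2, r in list(compiled):
                if q == q2 and (p, r) not in compiled:
                    compiled.add((p, r))  # newly compiled composite rule
                    changed = True
    return compiled


# Toy knowledge base: a 3-hop chain plus one irrelevant rule.
rules = [("P", "Q"), ("Q", "R"), ("R", "S"), ("X", "Y")]
sliced = backward_slice(rules, goal="S")     # ("X", "Y") is pruned
compiled = forward_chain(sliced)
print(("P", "S") in compiled)  # → True: the 3-hop chain becomes one step
```

Note how the backward slice bounds the otherwise quadratic blow-up of exhaustive rule composition: only rules that can reach the goal participate in compilation, which is what makes the forward-chaining phase tractable on larger knowledge bases.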
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 25006