Structured Abductive-Deductive-Inductive Reasoning for LLMs via Algebraic Invariants
Track: long paper (up to 10 pages)
Keywords: logical reasoning, abduction, deduction, induction, chain-of-thought, algebraic invariants, weakest link, possibilistic logic, property-based testing, reasoning verification
TL;DR: A symbolic reasoning scaffold that separates abduction, deduction, and induction for LLMs, with five algebraic invariants guaranteeing logical consistency across reasoning chains.
Abstract: Large language models exhibit systematic limitations in structured logical reasoning: they conflate hypothesis generation with verification, cannot distinguish conjecture from validated knowledge, and allow weak reasoning steps to propagate unchecked through inference chains. We present a symbolic reasoning scaffold that operationalizes Peirce's tripartite inference---abduction, deduction, and induction---as an explicit protocol for LLM-assisted reasoning. The framework enforces logical consistency through five algebraic invariants (the Gamma Quintet), the strongest of which---the Weakest Link bound---ensures that no conclusion in a reasoning chain can exceed the reliability of its least-supported premise. This principle, independently grounded as weakest link resolution in possibilistic logic and empirically validated for chain-of-thought reasoning, prevents logical inconsistencies from accumulating across multi-step inference. We verify all invariants through a property-based testing suite of 42 properties and 10 fuzz tests over $10^5$+ generated cases, providing a benchmark for evaluating logical consistency preservation in reasoning systems.
Presenter: ~Sankalp_Gilda1
Format: Maybe: the presenting author will attend in person, contingent on other factors that still need to be determined (e.g., visa, funding).
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Funding: Yes, the presenting author of this submission falls under ICLR’s funding aims, and funding would significantly impact their ability to attend the workshop in person.
Submission Number: 148
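The abstract's central Weakest Link bound (a conclusion's reliability cannot exceed that of its least-supported premise) can be illustrated with a minimal sketch. The function names, confidence representation, and the small fuzz loop below are hypothetical illustrations in the spirit of the paper's property-based testing suite, not the submission's actual Gamma Quintet implementation:

```python
import random

def weakest_link(premise_confidences):
    """Weakest Link bound: a conclusion's confidence is capped by the
    least-supported premise (min over premise confidences)."""
    return min(premise_confidences)

def chain_bound(steps):
    """Propagate the bound through a multi-step inference chain: each
    step's conclusion is capped by the weakest premise seen so far,
    so weak steps cannot silently strengthen downstream conclusions."""
    bound = 1.0
    for premise_confidences in steps:
        bound = min(bound, weakest_link(premise_confidences))
    return bound

# Property-based check over randomly generated chains: the final bound
# never exceeds any individual premise confidence anywhere in the chain.
random.seed(0)
for _ in range(10_000):
    steps = [[random.random() for _ in range(random.randint(1, 5))]
             for _ in range(random.randint(1, 8))]
    bound = chain_bound(steps)
    assert all(bound <= c for step in steps for c in step)
```

This mirrors weakest link resolution in possibilistic logic, where the necessity degree of a derived formula is the minimum over the degrees of the formulas used to derive it.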