Keywords: Automated theorem generation, minimal unsatisfiability, large language models, hybrid reasoning.
TL;DR: We present a neuro-symbolic framework combining $\Delta_{1}$ and LLMs to generate and explain minimal theorems for interpretable, auditable reasoning.
Abstract: Neuro-symbolic reasoning increasingly demands frameworks that unite the formal rigor of logic with the interpretability of large language models (LLMs). We introduce an end-to-end explainability-by-construction pipeline integrating the Automated Theorem Generator $\Delta_{1}$ based on the full triangular standard contradiction (FTSC) with LLMs. $\Delta_{1}$ deterministically constructs minimal unsatisfiable clause sets and complete theorems in polynomial time, ensuring both soundness and minimality by construction. The LLM layer verbalizes each theorem and proof trace into coherent natural-language explanations and actionable insights. Empirical studies across health care, compliance, and regulatory domains show that $\Delta_{1}$ + LLM enables interpretable, auditable, and domain-aligned reasoning. This work advances the convergence of logic, language, and learning, positioning constructive theorem generation as a principled foundation for neuro-symbolic explainable AI.
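The abstract's central object, a minimal unsatisfiable clause set, can be illustrated concretely. The sketch below is not $\Delta_{1}$'s FTSC construction (which the abstract only summarizes); it is a generic brute-force check, on a hypothetical two-variable example, that a CNF formula is unsatisfiable and that dropping any single clause restores satisfiability — the minimality property the paper claims to guarantee by construction.

```python
from itertools import product

# Literals are signed integers: +v means variable v, -v its negation.
# Example clause set over {p=1, q=2}: every clause shape on two variables.
clauses = [(1, 2), (-1, 2), (1, -2), (-1, -2)]

def satisfiable(cnf, n):
    """Brute-force SAT check over all 2^n truth assignments."""
    for bits in product([False, True], repeat=n):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in cl) for cl in cnf):
            return True
    return False

def minimally_unsat(cnf, n):
    """Unsatisfiable, yet removing any one clause yields a satisfiable set."""
    return (not satisfiable(cnf, n)
            and all(satisfiable(cnf[:i] + cnf[i + 1:], n)
                    for i in range(len(cnf))))

print(minimally_unsat(clauses, 2))  # → True
```

The example set is unsatisfiable because the four clauses jointly force both $q$ and $\neg q$, while each three-clause subset has a satisfying assignment.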
Submission Number: 15