Explainable AI for Mathematics: Proofs as Code with Knowledge Graph and Domain Ontology Support

Published: 15 Mar 2026, Last Modified: 15 Mar 2026 · Oral · CC BY 4.0
Keywords: neural theorem proving, knowledge graph, retrieval-augmented generation, explainable AI, formal mathematics, Lean 4, Mathlib, ontology, dependency graph, miniF2F
TL;DR: An ontology-typed knowledge graph built from Lean 4 Mathlib nearly triples neural theorem-proving success on hard problems via training-free, explainable graph retrieval, with deterministic pattern-based entry points outperforming LLM-generated ones.
Abstract: Neural theorem-proving systems can generate formal proofs, but they often behave as a "black box": it is unclear which pieces of mathematical knowledge led to success or failure. We present SciLibRU, an infrastructure that materializes Lean 4's Mathlib as an ontology-typed knowledge graph (tens of millions of RDF facts) and links mathematical entities to multimodal representations (code, natural-language text, formulae, and related artifacts) under a shared identifier space. Building on this graph, we enable transparent proof support: candidate hints are retrieved via graph navigation and/or semantic search, and each suggestion is explicitly traceable to concrete Mathlib dependency edges, making the evidence chain inspectable by humans. Experiments on miniF2F-Test show that graph-based augmentation substantially improves success on harder problems while remaining training-free and composable with any base prover.
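The graph-navigation retrieval described in the abstract can be sketched in miniature. The snippet below is an illustrative toy, not the paper's system: the lemma names, the graph shape, and the `retrieve_hints` function are all invented for this example, standing in for navigation over the actual RDF dependency graph. The point it demonstrates is the traceability property: every returned hint carries the chain of dependency edges that justifies it.

```python
from collections import deque

# Hypothetical miniature dependency graph in the spirit of the paper's
# Mathlib knowledge graph: edges point from a lemma to the lemmas it uses.
# All identifiers below are invented for illustration.
DEPS = {
    "amc12_2000_p5": ["abs_sub_lt_iff"],
    "abs_sub_lt_iff": ["abs_lt", "sub_lt_iff_lt_add"],
    "abs_lt": ["neg_lt", "lt_iff_le_not_le"],
    "sub_lt_iff_lt_add": ["lt_iff_le_not_le"],
}

def retrieve_hints(entry_point, max_hops=2):
    """Breadth-first graph navigation from an entry-point lemma.

    Returns each candidate hint together with the chain of dependency
    edges that led to it, so every suggestion stays inspectable.
    """
    hints = []
    queue = deque([(entry_point, [])])  # (node, evidence path so far)
    seen = {entry_point}
    while queue:
        node, path = queue.popleft()
        if len(path) >= max_hops:  # stop expanding beyond the hop budget
            continue
        for dep in DEPS.get(node, []):
            edge_chain = path + [(node, dep)]
            if dep not in seen:
                seen.add(dep)
                hints.append({"hint": dep, "evidence": edge_chain})
                queue.append((dep, edge_chain))
    return hints

for h in retrieve_hints("amc12_2000_p5"):
    chain = " ; ".join(f"{a} -> {b}" for a, b in h["evidence"])
    print(f"{h['hint']}  (via: {chain})")
```

Because the evidence field is just a list of concrete graph edges, a human reviewer can verify each hop against the underlying Mathlib dependency data rather than trusting an opaque ranking.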
Submission Number: 32