Abstract: Large language models have demonstrated remarkable capabilities on natural language processing tasks that require multi-step logical reasoning, such as automated theorem proving. However, challenges persist in theorem proving, including identifying key mathematical concepts, understanding their interrelationships, and correctly formalizing proofs from natural-language reasoning. We present KG-Prover, a novel framework that leverages knowledge graphs mined from reputable mathematical texts to augment general-purpose LLMs in constructing and formalizing mathematical proofs -- reasoning through the problem entirely in natural language before emitting a formal proof. We also study the effects of scaling graph-based, test-time compute with KG-Prover, demonstrating significant performance improvements over baselines across multiple datasets. General-purpose models improve by up to 21\% on miniF2F when combined with KG-Prover, with consistent improvements of 2-11\% on the ProofNet, miniF2F-test, and MUSTARD datasets. This work provides a promising approach to augmenting natural-language proof reasoning with knowledge graphs, without the need for additional fine-tuning.
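To make the pipeline described in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it assumes a toy knowledge graph, a hypothetical `retrieve_concepts` neighborhood lookup, and a generic `llm` callable standing in for a general-purpose model that first reasons in natural language and then formalizes.

```python
# Hypothetical sketch of the KG-Prover-style loop described in the abstract:
# retrieve related concepts from a knowledge graph, reason in natural language,
# then emit a formal (e.g. Lean) proof. All names and the toy graph are illustrative.
import networkx as nx
from typing import Callable

def retrieve_concepts(kg: nx.Graph, seed_terms: list[str], hops: int = 1) -> set[str]:
    """Collect concepts within `hops` edges of the seed terms (toy retrieval)."""
    frontier = {t for t in seed_terms if t in kg}
    found = set(frontier)
    for _ in range(hops):
        frontier = {n for c in frontier for n in kg.neighbors(c)} - found
        found |= frontier
    return found

def kg_prover(problem: str, kg: nx.Graph, llm: Callable[[str], str]) -> str:
    """One pass: natural-language reasoning grounded in retrieved concepts, then formalization."""
    seeds = [w.strip(".,") for w in problem.split()]  # naive keyword seeding
    concepts = retrieve_concepts(kg, seeds)
    nl_proof = llm(
        f"Problem: {problem}\nRelevant concepts: {sorted(concepts)}\n"
        "Write a complete natural-language proof."
    )
    return llm(f"Formalize this proof in Lean:\n{nl_proof}")  # NL -> formal step

if __name__ == "__main__":
    kg = nx.Graph()
    kg.add_edge("prime", "divisibility")
    kg.add_edge("divisibility", "modular arithmetic")
    echo_llm = lambda prompt: f"[LLM output for prompt of {len(prompt)} chars]"
    print(kg_prover("Show that 7 is prime.", kg, echo_llm))
```

Scaling graph-based, test-time compute would correspond here to widening the retrieval (larger `hops`) or sampling multiple reasoning passes per problem; the exact mechanism is not specified in this section.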
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: Large Language Models (LLMs), mathematical reasoning, knowledge graphs, proof formalization, natural language understanding, logical reasoning, theorem proving, AI-augmented mathematics
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 8056