Neural Theorem Proving: Generating and Structuring Proofs for Formal Verification

Published: 20 Apr 2025, Last Modified: 29 Aug 2025 · NeSy 2025 Poster · CC BY 4.0
Keywords: Neural Theorem Proving, Large Language Models, Formal Methods, Reinforcement Learning, Automated Theorem Proving
TL;DR: We present a framework that generates Isabelle proofs to verify properties of mathematical statements and code snippets, combining fine-tuned LLMs with methods that leverage off-the-shelf automated theorem provers.
Track: Neurosymbolic Generative Models
Abstract: Formally verifying properties of software has long been a desirable goal, and it has become more pressing with the emergence of LLM-generated code. LLMs themselves, in turn, provide an interesting avenue for exploring formal verification and mechanistic interpretability. Despite the successes of code-specific models in generating Lean4 and Isabelle code, generalized theorem proving remains far from solved and will continue to serve as a benchmark for reasoning capability in LLMs. In this work, we introduce a framework that generates whole proofs in a formal language, for use within systems that leverage built-in tactics and off-the-shelf automated theorem provers. Our framework comprises three components: a module that generates natural-language statements of the code to be verified, an LLM that generates formal proofs for a given statement, and a module that employs heuristics to build the final proof. To train the LLM, we use a two-stage fine-tuning process: SFT-based training first teaches the model to generate syntactically correct Isabelle code, and RL-based training then encourages it to generate proofs that a theorem prover verifies. We validate our framework on the miniF2F-test benchmark with the Isabelle proof assistant, and we design a use case that verifies the correctness of AWS S3 bucket access policy code. We also curate a dataset based on the FVEL\textsubscript{\textnormal{ER}} dataset for future training tasks.
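The three-component pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: every function name is a hypothetical placeholder (the authors' actual implementation lives in the linked ProofSeek repository), and the LLM and prover calls are stubbed with fixed strings.

```python
# Hypothetical sketch of the three-component framework from the abstract.
# All names are illustrative placeholders, not the authors' real API;
# LLM and theorem-prover calls are stubbed for self-containment.

def code_to_statement(code: str) -> str:
    # Component 1: produce a natural-language statement of the property
    # to verify (a real system would query an LLM here).
    return f"The snippet `{code}` satisfies its specified property."

def generate_proof_draft(statement: str) -> str:
    # Component 2: a fine-tuned LLM drafts a whole Isabelle proof.
    # Stubbed with a skeleton whose goal is left open via `sorry`.
    return 'theorem prop: "P"\n  sorry'

def assemble_final_proof(draft: str) -> str:
    # Component 3: heuristics patch the draft, e.g. discharging open
    # goals with an off-the-shelf prover call instead of `sorry`.
    return draft.replace("sorry", "by sledgehammer")

def verify(code: str) -> str:
    # End-to-end pipeline: code -> statement -> draft -> final proof.
    statement = code_to_statement(code)
    draft = generate_proof_draft(statement)
    return assemble_final_proof(draft)

print(verify("abs(x)"))
```

In the real framework, the final proof text would be checked by the Isabelle proof assistant; here the stub simply shows how the heuristic module replaces placeholders before that check.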
Paper Type: Long Paper
Software: https://github.com/kings-crown/ProofSeek
Submission Number: 46