Keywords: Formal Verification, Automated Theorem Proving, Neuro-Symbolic AI, Reinforcement Learning, Proof Synthesis, Proof Correction, Large Language Models, Verifier-in-the-Loop, Symbolic Reasoning, Logical Consistency, Curriculum Learning, Program Synthesis, AI Safety, Mathematical Reasoning
TL;DR: ProofNet++ is a neuro-symbolic system that combines large language models with formal verification and self-correction to produce reliable, machine-checkable mathematical proofs.
Abstract: We propose ProofNet++, a neuro-symbolic framework that enhances automated theorem proving by combining large language models (LLMs) with formal proof verification and self-correction mechanisms. Current LLM-based systems suffer from hallucinated logical steps and unverifiable reasoning. ProofNet++ mitigates these limitations by integrating symbolic proof tree supervision, a reinforcement learning loop using verifiers as reward functions, and an iterative self-correction module. Our experiments on miniF2F, Lean's mathlib, and HOL Light show that ProofNet++ significantly improves proof accuracy, correctness, and formal verifiability over prior models. We provide theoretical analysis of the convergence and stability of the verifier-guided RL framework and release our datasets and codebase for future research.
Submission Number: 11