Keywords: formal dependency retrieval, theorem prover, large language models, formal autoformalization
Abstract: Statement autoformalization, a crucial first step in formal verification, aims to transform informal descriptions of math problems into machine-verifiable formal representations but remains a significant challenge.
The core difficulty lies in the fact that existing language models hallucinate formal dependencies, producing references to definitions, lemmas, and theorems that are missing from or incorrect in the formal library.
Current dependency retrieval approaches exhibit poor precision and recall, and lack the scalability to leverage ever-growing public datasets. To bridge this gap, we propose a novel retrieval-augmented framework based on Direct Dependency Retrieval (DDR).
DDR directly generates candidate formal dependencies from natural-language mathematical descriptions and verifies their existence in the formal library via an efficient Suffix Array Check (SAC).
Using a SAC-constructed dependency retrieval dataset of over 500,000 samples, we fine-tune a high-precision DDR model and show that it significantly outperforms state-of-the-art methods in both retrieval precision and recall, yielding a clear advantage in downstream autoformalization tasks.
SAC also aids in assessing formalization difficulty and enables explicit quantification of hallucination in In-Context Learning (ICL).
Paper Type: Long
Research Area: Information Extraction and Retrieval
Research Area Keywords: Resources and Evaluation, Language Modeling, Information Extraction
Contribution Types: Model analysis & interpretability, Reproduction study, Data resources, Data analysis
Languages Studied: English, formal theorem-prover language
Submission Number: 2214