Keywords: AutoFormalization, Lean4, Dataset, LLM, AI4Math
TL;DR: This article proposes FMC, a Lean4 formal language dataset of mathematical competition-level difficulty, and evaluates SoTA provers on the proposed dataset.
Abstract: Efficient and accurate autoformalization methods, which leverage large-scale databases of natural language mathematical problems to construct formal language datasets, are key to advancing formal mathematical reasoning. In this paper, we propose an autoformalization pipeline based on large language models with error feedback, achieving a fully automatic and training-free formalization approach. Using this pipeline, we establish an Olympiad-level dataset that aligns natural language problems with their Lean formalizations. The dataset contains $3,922$ mathematical problems in natural language and $9,787$ in Lean, of which $64.46\%$ received quality ratings of good or above, making it suitable as a benchmark for automated theorem provers. Additionally, we investigate the formalization and reasoning capabilities of various general large language models and experimentally demonstrate that few-shot learning, error feedback, and an increased sampling budget each improve autoformalization performance.
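The error-feedback loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `llm_formalize` and `lean_check` are hypothetical stand-ins for an LLM call and a Lean 4 compiler invocation, and the retry budget is an assumed parameter.

```python
from typing import Optional

def llm_formalize(problem: str, feedback: Optional[str] = None) -> str:
    # Hypothetical stand-in for an LLM call. A real system would prompt
    # a model with the problem and, on retries, the compiler's error
    # message from the previous attempt.
    if feedback is None:
        return "theorem ex : 1 + 1 = 3 := by rfl"  # flawed first attempt
    return "theorem ex : 1 + 1 = 2 := by rfl"      # corrected attempt

def lean_check(code: str) -> Optional[str]:
    # Hypothetical stand-in for invoking the Lean 4 compiler on the
    # candidate formalization; returns an error string, or None on success.
    return None if "1 + 1 = 2" in code else "error: type mismatch"

def formalize_with_feedback(problem: str, max_rounds: int = 3) -> Optional[str]:
    # Training-free loop: generate a candidate, compile it, and feed any
    # compiler error back into the next generation attempt.
    feedback = None
    for _ in range(max_rounds):
        candidate = llm_formalize(problem, feedback)
        feedback = lean_check(candidate)
        if feedback is None:
            return candidate  # candidate compiles: accept it
    return None  # all rounds failed

result = formalize_with_feedback("Prove that 1 + 1 = 2.")
```

Increasing the sampling budget, as the abstract notes, corresponds to running this loop over multiple independent candidates per problem and keeping any that compile.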
Submission Number: 165