An Evaluation Benchmark for Autoformalization in Lean4

Published: 19 Mar 2024, Last Modified: 19 Mar 2024 | Tiny Papers @ ICLR 2024 | CC BY 4.0
Keywords: Large Language Models, LLM, autoformalization, theorem proving, dataset
TL;DR: We created an evaluation benchmark for autoformalization in Lean4 and tested state-of-the-art models (GPT-3.5, GPT-4, Gemini Pro) on it.
Abstract: In the advancing field of computational mathematics, Large Language Models (LLMs) hold the potential to revolutionize autoformalization, a process crucial across various disciplines. The introduction of Lean4, a programming language and interactive theorem prover, presents an unprecedented opportunity to rigorously assess the autoformalization capabilities of LLMs. This paper introduces a novel evaluation benchmark designed for Lean4, applying it to test the abilities of state-of-the-art LLMs, including GPT-3.5, GPT-4, and Gemini Pro. Our comprehensive analysis reveals that, despite recent advancements, these LLMs still exhibit limitations in autoformalization, particularly in more complex areas of mathematics. These findings underscore the need for further development in LLMs to fully harness their potential in scientific research and development. This study not only benchmarks current LLM capabilities but also sets the stage for future enhancements in the field of autoformalization.
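To make the task concrete, autoformalization here means translating an informal mathematical statement into Lean4. The following minimal sketch is our own illustration, not an item from the paper's benchmark; the IsEven definition and theorem name are chosen only for this example and use core Lean 4 (no Mathlib).

-- Informal statement: "The sum of two even natural numbers is even."
-- One possible Lean4 formalization (names here are illustrative, not from the benchmark):
def IsEven (n : Nat) : Prop := ∃ k, n = 2 * k

theorem even_add_even {m n : Nat} (hm : IsEven m) (hn : IsEven n) :
    IsEven (m + n) :=
  match hm, hn with
  -- witnesses a and b give m = 2 * a and n = 2 * b; then m + n = 2 * (a + b)
  | ⟨a, ha⟩, ⟨b, hb⟩ => ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩

A benchmark of this kind can then ask an LLM to produce the formal statement (and possibly the proof) from the informal sentence alone and check the output with the Lean4 compiler.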
Submission Number: 250