More Details, Please: Improving Autoformalization with More Detailed Proofs

Published: 13 Jun 2024, Last Modified: 28 Jun 2024 · ICML 2024 Workshop AI4MATH Poster · CC BY 4.0
Keywords: autoformalization, formalization, automated theorem proving, large language models, proof verification, mathematical reasoning
TL;DR: We propose SPADeR, an approach to autoformalization that uses language models to infer and explicitly incorporate implicit details from informal proofs
Abstract: The formalization of mathematical theorems and their proofs is a time-consuming and tedious process that, despite recent advances in the reasoning capabilities of AI systems, remains challenging for computers. Existing attempts to automate the process with language models struggle with the difference in level of detail between formal and informal proofs. Successful autoformalization requires models to understand and explain the nuances of logical arguments, a critical aspect of reasoning that is often overlooked in existing research. In this work, we introduce Sketch, Prove, Add Detail & Repeat (SPADeR), an approach that enhances proof autoformalizers by using language models to infer and explicitly incorporate implicit details from informal proofs. With the same number of autoformalization attempts, our method increases the percentage of successfully formalized problems in the miniF2F test dataset from 34.8% to 38.1%.
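The control flow implied by the acronym can be illustrated with a minimal, self-contained sketch. All helper functions below are hypothetical toy stand-ins for the language-model and proof-assistant calls; they are not the authors' actual prompts, models, or verifier interface.

```python
# Toy sketch of the Sketch -> Prove -> Add Detail -> Repeat loop.
# Every helper here is a hypothetical stand-in, not the authors' API.

def sketch(informal_proof):
    # Stand-in for an LLM drafting a formal proof sketch from informal steps.
    return {"steps": list(informal_proof["steps"])}

def prove(formal_sketch):
    # Stand-in for a proof assistant checking each step; in this toy model,
    # a step verifies only once it carries an explicit "(detail)" annotation.
    failed = [s for s in formal_sketch["steps"] if "(detail)" not in s]
    return (len(failed) == 0, failed)

def add_detail(informal_proof, failed_steps):
    # Stand-in for an LLM making the implicit details of failed steps explicit.
    informal_proof["steps"] = [
        s + " (detail)" if s in failed_steps else s
        for s in informal_proof["steps"]
    ]
    return informal_proof

def spader(informal_proof, max_rounds=3):
    """Repeat sketch/prove/add-detail until the proof checks or rounds run out."""
    for _ in range(max_rounds):
        formal_sketch = sketch(informal_proof)
        ok, failed = prove(formal_sketch)
        if ok:
            return formal_sketch
        informal_proof = add_detail(informal_proof, failed)
    return None  # all autoformalization attempts exhausted

result = spader({"steps": ["apply lemma A", "conclude by induction"]})
```

The point of the sketch is only the loop structure: verification feedback drives the addition of detail, rather than retrying the same under-specified proof.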
Submission Number: 24