REFACTOR: Learning to Extract Theorems from Proofs

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 (ICLR 2022 Submission)
Keywords: theorem extraction, mathematical reasoning, theorem proving, reasoning
Abstract: Human mathematicians are often good at recognizing modular and reusable theorems that bring complex mathematical results within reach. In this paper, we propose a novel method called theoREm-from-prooF extrACTOR (REFACTOR) for training neural networks to mimic this ability in formal mathematical theorem proving. We show that, on a set of unseen proofs, REFACTOR extracts $19.6\%$ of the theorems that humans would use to write the proofs. When applied to the existing Metamath library, REFACTOR extracts $16$ new theorems. We show that the existing proofs in the Metamath database can be refactored using these newly extracted theorems. The new theorems are used frequently after refactoring, with an average usage of $733.5$ times each, and help shorten proof lengths. Lastly, we demonstrate that a prover trained on the refactored dataset proves $14$-$30\%$ more test theorems (relative improvement), frequently leveraging a diverse set of newly extracted theorems.
One-sentence Summary: We use neural networks to extract reusable mathematical theorems from proofs and demonstrate their utility on several downstream tasks.
