TacticZero: Learning to Prove Theorems from Scratch with Deep Reinforcement Learning

Published: 09 Nov 2021, Last Modified: 25 Nov 2024
Venue: NeurIPS 2021 Poster
Readers: Everyone
Keywords: interactive theorem proving, reinforcement learning, HOL4
TL;DR: Applying deep reinforcement learning to interactive theorem proving. The framework supports learning both proof search strategies and tactic prediction, without using human examples.
Abstract: We propose a novel approach to interactive theorem proving (ITP) using deep reinforcement learning. The proposed framework is able to learn proof search strategies as well as tactic and argument prediction in an end-to-end manner. We formulate the process of ITP as a Markov decision process (MDP) in which each state represents a set of potential derivation paths. This structure allows us to introduce a novel backtracking mechanism which enables the agent to efficiently discard (predicted) dead-end derivations and restart the derivation from promising alternatives. We implement the framework in the HOL4 theorem prover. Experimental results show that the framework using learned search strategies outperforms existing automated theorem provers (i.e., hammers) available in HOL4 when evaluated on unseen problems. We further elaborate on the role of key components of the framework using ablation studies.
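To make the abstract's MDP formulation concrete, below is a minimal sketch (not the authors' code) of a state that holds a set of candidate derivation paths, where backtracking amounts to selecting a non-latest path to extend. All names here (`Fringe`, `ProverState`, `score_fringe`, `step`, `apply_tactic`) are hypothetical illustrations; the paper's actual policy networks and HOL4 interface live in the supplementary code.

```python
# Sketch of a fringe-based proof-search state with backtracking.
# Assumes Python 3.9+; apply_tactic stands in for a call into HOL4.
from dataclasses import dataclass, field

@dataclass
class Fringe:
    goals: list[str]            # open goals remaining on this derivation path
    tactic_history: list[str]   # tactics applied so far along this path

@dataclass
class ProverState:
    fringes: list[Fringe] = field(default_factory=list)

def score_fringe(fringe: Fringe) -> float:
    # Placeholder heuristic; in the paper this selection is made by a
    # learned policy rather than a fixed score.
    return -len(fringe.goals)

def step(state: ProverState, apply_tactic) -> ProverState:
    # Pick the most promising fringe to extend. Because every earlier
    # fringe stays in the state, choosing one that is not the most recent
    # implements backtracking: a predicted dead end is simply not extended.
    fringe = max(state.fringes, key=score_fringe)
    goal = fringe.goals[0]
    new_goals, tactic = apply_tactic(goal)   # e.g. run a tactic in HOL4
    if new_goals is not None:                # tactic succeeded
        state.fringes.append(Fringe(
            goals=new_goals + fringe.goals[1:],
            tactic_history=fringe.tactic_history + [tactic],
        ))
    return state                             # proof found when some fringe
                                             # has no goals left
```

A full agent would replace `score_fringe` with the learned fringe-selection policy and add learned policies for choosing the goal, the tactic, and its arguments, trained end-to-end with reinforcement learning as the abstract describes.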
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/tacticzero-learning-to-prove-theorems-from/code)