TacticZero: Learning to Prove Theorems from Scratch with Deep Reinforcement Learning

21 May 2021, 20:46 (edited 26 Oct 2021) · NeurIPS 2021 Poster
  • Keywords: interactive theorem proving, reinforcement learning, HOL4
  • TL;DR: Applying deep reinforcement learning to interactive theorem proving. The framework supports learning both proof search strategies and tactic prediction, without using human examples.
  • Abstract: We propose a novel approach to interactive theorem proving (ITP) using deep reinforcement learning. The proposed framework is able to learn proof search strategies as well as tactic and argument prediction in an end-to-end manner. We formulate the process of ITP as a Markov decision process (MDP) in which each state represents a set of potential derivation paths. This structure allows us to introduce a novel backtracking mechanism which enables the agent to efficiently discard (predicted) dead-end derivations and restart the derivation from promising alternatives. We implement the framework in the HOL theorem prover. Experimental results show that the framework using learned search strategies outperforms existing automated theorem provers (i.e., hammers) available in HOL when evaluated on unseen problems. We further elaborate the role of key components of the framework using ablation studies.
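  A minimal sketch of the MDP state described in the abstract, where a state is a set of candidate derivation paths ("fringes") and backtracking amounts to selecting a different fringe. All names and the goal representation here are illustrative assumptions, not the actual TacticZero implementation:

```python
class ProofSearch:
    """Toy sketch: a state is a list of fringes; each fringe is a frozenset
    of open goals. Proving every goal in any one fringe closes the proof."""

    def __init__(self, initial_goal):
        self.fringes = [frozenset([initial_goal])]

    def step(self, fringe_idx, goal, new_goals):
        """Apply a tactic to `goal` inside fringe `fringe_idx`, producing
        `new_goals`. An empty `new_goals` means the goal was discharged.
        Returns True once a fringe becomes empty (proof found)."""
        fringe = self.fringes[fringe_idx]
        assert goal in fringe
        new_fringe = (fringe - {goal}) | frozenset(new_goals)
        if new_fringe not in self.fringes:
            # Keep all earlier fringes: the agent can backtrack to any of
            # them later, discarding a predicted dead end without losing
            # previously explored alternatives.
            self.fringes.append(new_fringe)
        return len(new_fringe) == 0

# Hypothetical usage: goals are plain strings; a tactic splits "a_and_b"
# into two subgoals, which are then discharged one by one.
search = ProofSearch("a_and_b")
search.step(0, "a_and_b", ["a", "b"])  # split the conjunction
search.step(1, "a", [])                # discharge subgoal "a"
done = search.step(2, "b", [])         # discharge subgoal "b" -> proof found
```

  In the paper's setting the agent's policy chooses the fringe, the goal within it, and the tactic (with arguments); here those choices are passed in explicitly to keep the sketch self-contained.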
  • Supplementary Material: zip
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.