Keywords: neural theorem proving
Abstract: Learning to do complex reasoning is a central objective of artificial intelligence. Autoregressive language models have shown promise in generating intermediate steps for problem solving; however, complex reasoning tasks such as theorem proving remain challenging due to their vast search spaces. Classical works have considered reasoning by searching, e.g., expanding the reasoning space with tree search to explore intermediate steps, and reasoning by decomposing, i.e., breaking down the problem into higher-level thoughts that prompt lower-level actions. In this work, we develop Reasoning in Reasoning (RiR), a hierarchical framework that formally unifies decomposition and search via a planner-actor game. Using neural theorem proving as a representative task, our approach breaks complex theorem proving problems into achievable sub-goals that abstract over formal proofsteps, giving models: (i) improved generalizability in reasoning step generation, (ii) a more compact and informative search space for reasoning trajectories, and (iii) an efficient mechanism for learning to plan. We empirically show that RiR achieves concrete performance gains on popular theorem proving datasets, including LeanDojo and miniF2F, while being highly efficient (e.g., RiR is nearly 3x faster than the existing state-of-the-art baseline on miniF2F). We further present information-theoretic conjectures on the principles driving RiR's effectiveness.
Concurrent Submissions: N/A
Submission Number: 38