Proof Flow: Preliminary Study on Generative Flow Network Language Model Tuning for Formal Reasoning

Published: 10 Oct 2024, Last Modified: 25 Oct 2024
Venue: Sys2-Reasoning Poster
License: CC BY 4.0
Keywords: GFlowNet, Neural Theorem Proving, Lean, Reinforcement Learning, LLM, Reasoning
TL;DR: We used GFlowNet fine-tuning to improve LLM reasoning in Lean. Please note: this is an in-progress work, as invited by the System 2 Workshop call for papers.
Abstract: Reasoning is a fundamental substrate for solving novel and complex problems. Deliberate efforts to learn and develop frameworks for System 2 reasoning have made great strides, yet problems of sufficient complexity remain largely out of reach for open models. To address this gap, we examine the potential of Generative Flow Networks (GFlowNets) as a fine-tuning method for LLMs to unlock advanced reasoning capabilities. In this paper, we present a proof of concept in the domain of formal reasoning, specifically in the Neural Theorem Proving (NTP) setting, where proofs specified in a formal language such as Lean can be deterministically and objectively verified. Unlike classical reward-maximization reinforcement learning, which frequently over-exploits high-reward actions and fails to explore the state space effectively, GFlowNets have emerged as a promising approach for sampling compositional objects, improving generalization, and enabling models to maintain diverse hypotheses. Our early results demonstrate the potential of GFlowNet fine-tuning to enhance model performance in a search setting, which is especially relevant given the paradigm shift towards inference-time compute scaling and "thinking slowly."
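To make the verification setting concrete, here is a toy Lean 4 theorem of our own (an illustration, not an example from the paper). The Lean kernel either accepts the proof term or rejects it, with no partial credit, which is what makes the NTP reward signal deterministic and objective.

```lean
-- If this file elaborates, the proof is correct; if not, Lean reports an error.
-- A prover LM would emit the proof script after the `:=` (here, a `by` block).
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b  -- `Nat.add_comm` is a lemma in the Lean 4 core library
```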
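The abstract does not specify which GFlowNet training objective is used, so the sketch below illustrates one common choice, the trajectory balance loss (Malkin et al., 2022), adapted to token-level LM fine-tuning. All names (`trajectory_balance_loss`, the stand-in tensors) are our own illustrative assumptions: the policy is the autoregressive LM, a trajectory is a generated proof, and the terminal reward would come from the Lean checker.

```python
# Minimal sketch, assuming a trajectory-balance-style GFlowNet objective;
# random tensors stand in for a real LM and a real Lean verifier.
import torch

def trajectory_balance_loss(log_pf_tokens: torch.Tensor,
                            log_z: torch.Tensor,
                            log_reward: torch.Tensor) -> torch.Tensor:
    """Trajectory balance loss for one sampled proof.

    log_pf_tokens: per-token log-probs of the sampled proof under the model.
    log_z: learned scalar estimate of the log partition function.
    log_reward: log R(x), e.g. derived from the Lean verifier's verdict.
    """
    # TB pushes log Z + sum_t log P_F(a_t | s_t) towards log R(x); at the
    # optimum the model samples proofs with probability proportional to
    # reward, rather than collapsing onto a single high-reward proof.
    return (log_z + log_pf_tokens.sum() - log_reward) ** 2

# Toy usage with placeholder values.
log_pf = torch.randn(12, requires_grad=True)    # stand-in for LM token log-probs
log_z = torch.zeros((), requires_grad=True)     # learned log-partition estimate
reward = torch.tensor(1.0)                      # e.g. 1.0 if Lean accepts the proof
loss = trajectory_balance_loss(log_pf, log_z, torch.log(reward))
loss.backward()                                 # gradients for LM params and log Z
```

Sampling in proportion to reward, rather than maximizing it, is what would let the fine-tuned model maintain the diverse hypotheses the abstract highlights as valuable in a search setting.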
Submission Number: 75