TL;DR: Flow of Reasoning (FoR) is a diversity-seeking finetuning method that enhances reasoning ability in large language models by using GFlowNets to discover diverse and accurate solutions through a Markovian flow on a directed acyclic graph.
Abstract: The ability to generate diverse solutions to a given problem is a hallmark of human creativity. This divergent reasoning is also crucial for machines, enhancing their robustness and enabling them to assist humans in many applications such as scientific discovery. However, existing approaches to multi-step reasoning with large language models (LLMs) have mostly focused on reasoning accuracy alone, without also discovering more diverse valid solutions. For example, supervised fine-tuning improves reasoning quality but requires vast labeled data, while reward-maximizing reinforcement learning finds top-reward solutions but neglects solution diversity. To fill this gap, we propose Flow of Reasoning (FoR), an efficient diversity-seeking LLM finetuning method aimed at improving reasoning quality and diversity with minimal data. FoR formulates multi-step LLM reasoning as a Markovian flow on a DAG-structured reasoning graph. This formulation allows us to incorporate and adapt principled GFlowNet approaches for finetuning LLMs to sample divergent paths with probabilities proportional to the (unnormalized) reward of target problems. Extensive experiments show that, with limited training examples (e.g., 15 examples), FoR enables the discovery of diverse, creative, high-quality solutions, greatly outperforming a wide range of existing inference and training methods across six challenging reasoning tasks, including BlocksWorld (embodied reasoning), Game24 (math puzzle solving), Rubik's Cube (spatial reasoning), 1D-ARC (abstraction reasoning), GSM8k (math reasoning), and ProntoQA (logical reasoning). Code is available at https://github.com/Yu-Fangxu/FoR.
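The abstract describes finetuning the LLM so that reasoning paths are sampled with probability proportional to an unnormalized reward. As a minimal sketch of what such a GFlowNet-style objective can look like, the following toy code implements the standard trajectory-balance loss; whether FoR uses this exact variant, and the particular values shown, are assumptions for illustration, not the paper's implementation.

```python
import math

# Hedged sketch: a GFlowNet trajectory-balance objective for one reasoning
# path tau = s_0 -> s_1 -> ... -> s_T on the DAG:
#   L(tau) = (log Z + sum_t log P_F(s_t -> s_{t+1}) - log R(tau))^2
# Here Z is a learned partition-function estimate, P_F is the step-wise
# forward policy (e.g., the LLM's probability of each reasoning step), and
# R(tau) is the unnormalized terminal reward. Driving L to zero for all
# paths makes sampling probability proportional to R, which is the
# diversity-seeking behavior the abstract describes.

def trajectory_balance_loss(log_z, step_log_probs, reward):
    """Squared trajectory-balance residual for a single reasoning path."""
    log_pf = sum(step_log_probs)  # log-probability of the whole path
    return (log_z + log_pf - math.log(reward)) ** 2

# Toy usage: a hypothetical 3-step path with made-up log-probs and reward.
loss = trajectory_balance_loss(
    log_z=0.5,
    step_log_probs=[-0.7, -1.2, -0.4],  # per-step log P_F from the policy
    reward=0.2,                          # unnormalized task reward R(tau)
)
print(round(loss, 4))
```

In an actual finetuning loop, `step_log_probs` would come from the LLM's token-level likelihoods for each reasoning step, and `log_z` would be a trainable scalar updated jointly with the model by gradient descent on this loss.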
Lay Summary: Currently, AI models often focus on finding just one correct answer to a problem, such as solving a puzzle via its single best-known solution, overlooking other valid approaches. We explored methods to teach Large Language Models (LLMs) to discover multiple solutions to a given problem, even when provided with limited training examples.
To address this, we developed a specialized training technique called "Flow of Reasoning" (FoR). This technique promotes diverse thinking, enhancing LLMs' ability to assist with tasks that demand innovative, "out-of-the-box" solutions and ultimately boosting their problem-solving capabilities. Our findings show that FoR significantly improves accuracy, solution diversity, and creativity compared to existing methods.
Our work formulates a general framework for LLM multi-step reasoning, allowing for straightforward adaptation to various LLM-based reasoning tasks.
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Models, reasoning, diversity, multi-step reasoning
Submission Number: 7164