Abstract: Large Language Models (LLMs) have demonstrated remarkable abilities across various language tasks, but solving complex reasoning problems remains a significant challenge. While existing methods, such as Chain-of-Thought (CoT) and Tree-of-Thought (ToT), enhance reasoning by decomposing problems or structuring prompts, they typically perform a single pass of reasoning and may fail to revisit flawed paths, compromising accuracy. To address this limitation, we propose a novel reasoning framework called Forest-of-Thought (FoT), which integrates multiple reasoning trees to leverage collective decision-making for solving complex logical problems. FoT employs sparse activation to select the most relevant reasoning paths, improving both efficiency and accuracy. Additionally, we introduce a dynamic self-correction strategy that enables real-time error correction, along with consensus-guided decision-making to optimize both correctness and computational resources. Experimental results demonstrate that the FoT framework, combined with these strategies, significantly enhances the reasoning capabilities of LLMs, enabling them to solve complex tasks with greater precision and efficiency.
Lay Summary: Large Language Models (LLMs) have achieved impressive performance across a range of natural language tasks but still struggle with complex reasoning challenges. Methods like Chain-of-Thought (CoT) and Tree-of-Thought (ToT) enhance reasoning by breaking down problems or structuring the thinking process, yet they typically rely on a single pass of reasoning. This limitation prevents them from revisiting and correcting flawed reasoning paths, leading to reduced accuracy in difficult tasks.
To overcome this, we introduce Forest-of-Thought (FoT), a novel framework that integrates multiple reasoning trees operating in parallel. FoT employs sparse activation to select the most relevant reasoning paths, improving computational efficiency. It also incorporates a dynamic self-correction mechanism that allows real-time error revision and a consensus-guided strategy to determine the final output, making the reasoning process more robust and adaptive.
Experimental results show that FoT significantly enhances the reasoning performance of LLMs, allowing them to solve complex logical problems with improved precision and efficiency. This work advances the capabilities of language models in handling high-level cognitive tasks and offers a scalable solution for integrating structured, self-correcting reasoning into future AI systems.
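The three ingredients described above (parallel reasoning trees, sparse activation, and consensus-guided decision-making) can be sketched at a high level as follows. This is a minimal illustration, not the authors' implementation: `reason_tree` is a hypothetical callable standing in for one reasoning tree, assumed to return an answer together with a self-assessed confidence score.

```python
from collections import Counter

def forest_of_thought(question, reason_tree, n_trees=4, activation_threshold=0.5):
    """Hypothetical sketch of FoT-style inference.

    Runs several independent reasoning trees, keeps only trees whose
    confidence clears a sparse-activation threshold, and returns the
    majority (consensus) answer among the activated trees.
    """
    answers = []
    for seed in range(n_trees):
        # Each tree reasons independently (e.g., with a different seed/prompt).
        answer, confidence = reason_tree(question, seed=seed)
        # Sparse activation: discard low-confidence reasoning paths.
        if confidence >= activation_threshold:
            answers.append(answer)
    if not answers:
        return None
    # Consensus-guided decision: majority vote over activated trees.
    return Counter(answers).most_common(1)[0][0]
```

The dynamic self-correction mechanism would additionally revise individual reasoning steps inside each tree before its answer is emitted; that step is omitted here for brevity.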
Link To Code: https://github.com/iamhankai/Forest-of-Thought
Primary Area: Deep Learning->Large Language Models
Keywords: Forest-of-Thought; FoT; NLP; LLM
Submission Number: 445