Abstract: The growing scale of large language models (LLMs) has brought powerful reasoning capabilities.
Effective instructions can significantly improve LLMs' reasoning abilities. However, current prompt construction methods mainly focus on enhancing reasoning strategies or knowledge from an inductive perspective, overlooking the need to provide models with explicit rules to follow from a deductive viewpoint.
As a result, models lack reliable underlying logic and produce incorrect answers.
This paper proposes an Induction to Deduction (I2D) framework that enables LLMs to automatically extract rules from tasks and apply them during the actual reasoning process.
Within the framework, we combine hierarchical clustering and Monte Carlo tree search (MCTS) to extract potential rules from tasks as comprehensively as possible. Experimental results on complex benchmarks such as GSM8K and Big-Bench Hard
demonstrate performance improvements of up to 15\% over few-shot settings and show the broad applicability of our method.
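The abstract does not give implementation details, but the induction step it describes could look roughly like the sketch below: cluster task examples by embedding similarity, then ask the model to abstract a shared rule from each cluster. The `embed` and `llm_complete` callables and the prompt wording are hypothetical stand-ins, and the MCTS refinement over candidate rules is omitted; this is a sketch of the general idea, not the paper's method.

```python
# Minimal sketch of rule induction via hierarchical clustering, assuming
# user-supplied `embed` (text -> vector) and `llm_complete` (prompt -> text)
# callables. The MCTS search over candidate rules is not shown.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def induce_rules(examples, embed, llm_complete, n_clusters=3):
    """Group task examples by agglomerative clustering and induce one
    candidate rule per cluster."""
    X = np.stack([embed(e) for e in examples])            # (n, d) embeddings
    Z = linkage(X, method="ward")                         # agglomerative tree
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    rules = []
    for c in range(1, n_clusters + 1):
        group = [e for e, lab in zip(examples, labels) if lab == c]
        prompt = ("State one general rule that solves all of these:\n"
                  + "\n".join(group))
        rules.append(llm_complete(prompt))                # candidate rule
    return rules
```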
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: Large Language Models, Reasoning, Prompting
Languages Studied: English
Submission Number: 2132