IAO prompting: Forcing Large Language Models to Show their Reasoning through an Input-Action-Output Template
Abstract: Chain-of-thought (CoT) prompting, which makes intermediate reasoning steps apparent, further improves the effectiveness of Large Language Models (LLMs) in tackling diverse reasoning problems.
In this work, we introduce IAO (Input-Action-Output) prompting, a straightforward template-based prompting method that allows the complex reasoning process to be modelled explicitly and in a structured manner.
IAO autonomously breaks problems down into a series of simpler reasoning steps and solves them in sequence, each with explicit input information, the action applied, and an intermediate output. Each solved step informs the subsequent steps, facilitating progressive reasoning. This explicit structure improves not only reasoning performance but also interpretability and transparency.
Experiments across various reasoning tasks demonstrate IAO's strong zero-shot capabilities. Human evaluation validates the transparency and interpretability of IAO reasoning chains.
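As a rough illustration of the structure described in the abstract, the sketch below shows what an IAO-style zero-shot prompt might look like. The template wording and the `query_llm` helper are assumptions for illustration only, not the authors' exact prompt or implementation.

```python
# Illustrative sketch of an IAO-style zero-shot prompt (hypothetical wording,
# not the paper's exact template). `query_llm` is a placeholder for any LLM API.

IAO_TEMPLATE = """Answer the question by breaking it into steps.
For each step, state:
Input: the information this step uses
Action: the operation applied to that input
Output: the intermediate result

Question: {question}

Step 1:
Input:"""


def iao_prompt(question: str) -> str:
    """Build a zero-shot IAO-style prompt for a given question."""
    return IAO_TEMPLATE.format(question=question)


def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM completion call."""
    raise NotImplementedError("Plug in an LLM client here.")


if __name__ == "__main__":
    prompt = iao_prompt(
        "Tom has 3 boxes with 4 apples each and gives away 5 apples. "
        "How many apples does he have left?"
    )
    print(prompt)                   # Inspect the structured prompt
    # answer = query_llm(prompt)    # Uncomment once an LLM client is wired in
```

The model is expected to continue the template, emitting explicit Input/Action/Output triples for each step, with each step's Output feeding the Input of the next.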
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Chain of Thought Prompting, Large Language Models, In-context Learning, Few-shot Learning, Arithmetic Reasoning, Commonsense Reasoning, Symbolic Reasoning
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 1395