EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning

Published: 28 Oct 2023, Last Modified: 13 Nov 2023, MATH-AI 23 Poster
Keywords: In-context learning, prompting, large language models, zero-shot learning, few-shot learning
TL;DR: A prompting approach that improves the in-context learning performance of language models on arithmetic, logical reasoning, and reading comprehension tasks.
Abstract: Language models are achieving impressive performance on various tasks by aggressively adopting inference-time prompting techniques, such as zero-shot and few-shot prompting. In this work, we introduce EchoPrompt, a simple yet effective approach that prompts the model to rephrase its queries before answering them. EchoPrompt is adapted for both zero-shot and few-shot in-context learning with standard and chain-of-thought prompting. Experimental results show that EchoPrompt yields substantial improvements across all these settings for four families of causal language models. These improvements are observed across various numerical reasoning (e.g., GSM8K, SVAMP), reading comprehension (e.g., DROP), and logical reasoning (e.g., Coin Flipping) tasks. On average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 by 5% on numerical tasks and 13% on reading comprehension tasks. Our empirical results indicate that EchoPrompt is an effective technique for enhancing in-context learning performance. We recommend incorporating EchoPrompt into various baseline prompting strategies to achieve performance boosts.
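For illustration, below is a minimal sketch of how an EchoPrompt-style zero-shot prompt might be constructed from the abstract's description (ask the model to restate the query before reasoning toward an answer). The exact instruction wording and the helper name build_echoprompt are assumptions for this example, not the authors' reference implementation.

# Minimal sketch of EchoPrompt-style zero-shot prompting, based on the
# abstract's description. The instruction wording below is an assumption,
# not necessarily the exact phrasing used in the paper.

def build_echoprompt(question: str) -> str:
    """Append an instruction asking the model to repeat the question
    before reasoning step by step toward the answer."""
    return (
        f"Q: {question}\n"
        "A: Let's repeat the question and also think step by step.\n"
    )

# Example usage with a GSM8K-style word problem:
prompt = build_echoprompt(
    "A baker made 24 cookies and sold half of them. How many are left?"
)
print(prompt)  # This prompt would then be sent to a language model.

Sending such a prompt induces the model to first echo the question and then produce a chain of thought, which is the behavior the abstract credits with the reported gains.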
Submission Number: 22