Text and Patterns: For Effective Chain of Thought It Takes Two to Tango

22 Sept 2022 (modified: 25 Nov 2024) · ICLR 2023 Conference Withdrawn Submission
Keywords: in-context learning, few-shot prompting, chain of thought prompting, large language models
Abstract: Over the past decade, we have witnessed dramatic gains in natural language processing and an unprecedented scaling of large language models. These developments have been accelerated by the advent of few-shot techniques such as chain of thought (CoT) prompting. Specifically, CoT pushes the performance of large language models in a few-shot setup by augmenting the prompts with intermediate steps. Despite impressive results across various tasks, the reasons behind its success have not been explored. This work uses counterfactual prompting to develop a deeper understanding of CoT-based few-shot prompting mechanisms in large language models. We first systematically identify and define the key components of a prompt: symbols, patterns, and text. Then, we devise and conduct an exhaustive set of carefully designed experiments across four different tasks, querying the model with counterfactual prompts in which only one of these components is altered. Our experiments across three large language models (PaLM, GPT-3, and CODEX) reveal several surprising findings and bring into question the conventional wisdom around few-shot prompting. First, the presence of factual patterns in a prompt is practically immaterial to the success of CoT. Second, our results suggest that the primary role of intermediate steps may not be to facilitate learning "how" to solve a task; rather, the intermediate steps serve as a beacon for the model to realize "what" symbols to replicate in the output to form a factual answer. As such, the patterns are merely a channel through which the model is "tricked" into forming sentences that resemble correct answers. This pathway is facilitated by text, which imbues patterns with commonsense knowledge and meaning. Our empirical and qualitative analysis reveals that a symbiotic relationship between text and patterns explains the success of few-shot prompting: text helps extract commonsense knowledge from the question to support patterns, and patterns enforce task understanding and direct text generation. This systematic understanding of CoT enables us to devise a concise chain of thought, dubbed CCoT, in which text and patterns are pruned by over 20%, retaining only their key roles. This reduction in the number of tokens delivers an on-par or slightly higher task solve rate. We release datasets and anonymized code for reproducing our results at https://anonymous.4open.science/r/CoTTwoToTango-3106/.
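The counterfactual setup described in the abstract (hold two prompt components fixed, alter the third) can be illustrated with a minimal sketch. All names and the three-field decomposition below are illustrative assumptions for exposition, not the authors' released code:

```python
# Minimal sketch of counterfactual prompt construction: an exemplar is
# decomposed into symbols, a pattern, and text, and exactly one component
# is swapped out while the other two are held fixed. Names are hypothetical.

BASE_EXEMPLAR = {
    "symbols": ["5", "3", "8"],                      # task-specific tokens (e.g., numbers)
    "pattern": "{a} + {b} = {c}",                    # intermediate-step template
    "text": "Adding the apples together gives",      # natural-language glue
}

def render(exemplar):
    """Compose one chain-of-thought exemplar from its three components."""
    a, b, c = exemplar["symbols"]
    step = exemplar["pattern"].format(a=a, b=b, c=c)
    return f"{exemplar['text']} {step}."

def counterfactual(exemplar, component, replacement):
    """Return a copy of the exemplar with exactly one component altered."""
    altered = dict(exemplar)
    altered[component] = replacement
    return altered

# Example: replace the factual pattern with a counterfactual one,
# keeping symbols and text unchanged.
cf = counterfactual(BASE_EXEMPLAR, "pattern", "{a} + {b} = {a}")
print(render(BASE_EXEMPLAR))  # Adding the apples together gives 5 + 3 = 8.
print(render(cf))             # Adding the apples together gives 5 + 3 = 5.
```

Comparing model behavior on the base and counterfactual prompts isolates the contribution of the altered component, which is the core probe behind the paper's findings.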
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (eg, AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)
TL;DR: Text and patterns play a complementary role in the success of few-shot prompting.