Exploring Demonstration Ensembling for In-context Learning

Published: 04 Mar 2023, Last Modified: 14 Apr 2024 · ME-FoMo 2023 Spotlight
Keywords: in-context learning, few-shot learning, prompting
TL;DR: A study on example ensembling for in-context learning with language models and a comparison to the standard example concatenation approach.
Abstract: In-context learning (ICL) operates by showing language models (LMs) examples of input-output pairs for desired tasks, i.e., demonstrations. The standard approach for ICL is to prompt the LM with concatenated demonstrations followed by the test input. This approach suffers from two issues. First, concatenation offers almost no control over the contribution of each demonstration to the model prediction, which can be sub-optimal when some demonstrations are not very relevant to the test example. Second, due to the input length limit of transformer models, it can be infeasible to fit many examples into the context, especially when dealing with long-input tasks. In this work, we explore Demonstration Ensembling (DENSE) as an alternative to simple concatenation. DENSE predicts outputs using subsets (i.e., buckets) of the demonstrations and then combines the output probabilities resulting from each subset to produce the final prediction. We study different ensembling methods using GPT-J and experiment on 7 different language tasks. Our experiments show max ensembling to outperform concatenation by an average of 3.8 points.
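The abstract describes DENSE only at a high level, so here is a minimal sketch of the idea under stated assumptions: demonstrations are split round-robin into buckets, each bucket is concatenated into its own prompt, and the per-bucket label probabilities are combined by max (or mean) ensembling. `lm_label_probs`, `make_prompt`, and the prompt template are hypothetical helpers for illustration, not the paper's actual implementation or API.

```python
# Sketch of Demonstration Ensembling (DENSE): score the test input under
# each bucket of demonstrations, then combine the per-bucket label
# probabilities. Max ensembling takes the highest probability any bucket
# assigns to a label; mean ensembling averages across buckets.

from typing import Callable, Dict, List, Sequence, Tuple


def make_prompt(demos: Sequence[Tuple[str, str]], test_input: str) -> str:
    """Concatenate one bucket of (input, output) demos, then the test input.

    The template here is a generic assumption, not the paper's format.
    """
    lines = [f"Input: {x}\nOutput: {y}" for x, y in demos]
    lines.append(f"Input: {test_input}\nOutput:")
    return "\n\n".join(lines)


def dense_predict(
    demos: Sequence[Tuple[str, str]],
    test_input: str,
    labels: Sequence[str],
    lm_label_probs: Callable[[str, Sequence[str]], Dict[str, float]],
    num_buckets: int = 3,
    combine: str = "max",
) -> str:
    """Split demos into buckets, score each bucket's prompt, combine scores."""
    # Round-robin assignment of demonstrations to buckets (one of several
    # plausible bucketing schemes; the paper studies its own choices).
    buckets: List[List[Tuple[str, str]]] = [
        list(demos[i::num_buckets]) for i in range(num_buckets)
    ]
    per_bucket = [
        lm_label_probs(make_prompt(b, test_input), labels) for b in buckets if b
    ]

    scores: Dict[str, float] = {}
    for label in labels:
        vals = [probs[label] for probs in per_bucket]
        scores[label] = max(vals) if combine == "max" else sum(vals) / len(vals)
    return max(scores, key=scores.get)


if __name__ == "__main__":
    def toy_scorer(prompt: str, labels: Sequence[str]) -> Dict[str, float]:
        # Hypothetical stand-in: a real scorer would query the LM (e.g., GPT-J)
        # for the probability of each label verbalizer given the prompt.
        return {lab: 1.0 / len(labels) for lab in labels}

    demos = [
        ("great movie", "positive"),
        ("dull plot", "negative"),
        ("loved it", "positive"),
        ("waste of time", "negative"),
    ]
    print(dense_predict(demos, "a charming film", ["positive", "negative"], toy_scorer))
```

Because each bucket is scored in a separate forward pass, no single prompt needs to hold all demonstrations, which is what makes the approach viable for long-input tasks that exceed the context limit under plain concatenation.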
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2308.08780/code)