More Samples or More Prompt Inputs? Exploring Effective Few-Shot In-Context Learning for LLMs with In-Context Sampling
Abstract: While most existing work on LLM prompting techniques focuses only on selecting a better set of data samples within a single prompt input (In-Context Learning, or ICL), why not design and leverage multiple prompt inputs together to further improve LLM performance? In this work, we propose In-Context Sampling (ICS), a low-resource LLM prompting technique that produces confident predictions by optimizing the construction of multiple ICL prompt inputs. Extensive experiments with two open-source LLMs (FlanT5-XL and Mistral-7B) on four NLI datasets (e-SNLI, Multi-NLI, ANLI, and Contract-NLI) show that ICS consistently improves the LLMs' prediction performance. An in-depth evaluation of three proposed data similarity-based ICS strategies suggests that these strategies can further raise performance, pointing to a new and promising direction for future research.
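The sketch below illustrates the core ICS idea as described in the abstract: construct several ICL prompt inputs from different sampled demonstration sets, query the LLM with each, and aggregate the predictions into one confident answer. The helper names (`query_llm`, `build_prompt`), the random demonstration sampling, and the majority-vote aggregation are illustrative assumptions, not the paper's exact algorithm or its similarity-based sampling strategies.

```python
import random
from collections import Counter
from typing import Callable, List, Tuple


def build_prompt(demos: List[Tuple[str, str]], test_input: str) -> str:
    """Format sampled demonstrations plus the test input into one ICL prompt."""
    demo_block = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in demos)
    return f"{demo_block}\nInput: {test_input}\nLabel:"


def ics_predict(
    test_input: str,
    candidate_pool: List[Tuple[str, str]],   # labeled examples to sample demonstrations from
    query_llm: Callable[[str], str],         # hypothetical LLM call returning a label string
    num_prompts: int = 5,                    # how many ICL prompt inputs to construct
    shots_per_prompt: int = 4,               # demonstrations per prompt
    seed: int = 0,
) -> str:
    """Query the LLM with multiple sampled ICL prompts and majority-vote the labels."""
    rng = random.Random(seed)
    votes = []
    for _ in range(num_prompts):
        demos = rng.sample(candidate_pool, k=shots_per_prompt)
        votes.append(query_llm(build_prompt(demos, test_input)).strip())
    # Aggregate: return the most frequent predicted label across the prompt inputs.
    return Counter(votes).most_common(1)[0][0]
```

In this reading, each prompt input acts as one "sample" over the demonstration space, and aggregating across prompts trades a few extra LLM calls for a more stable prediction; the paper's similarity-based strategies would replace the uniform `rng.sample` step with an informed selection of demonstrations.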
Paper Type: long
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English