Demonstration Selection for In-Context Learning via Reinforcement Learning

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Diversity in demonstration selection is critical for model generalization, as it enables broader coverage of structures and concepts, yet constructing appropriate demonstration sets remains a key research challenge. This paper introduces Relevance-Diversity Enhanced Selection (RDES), an approach that uses reinforcement learning (RL) to optimize the selection of diverse reference demonstrations for tasks amenable to in-context learning (ICL), particularly text classification and reasoning, in few-shot prompting scenarios. RDES employs Q-learning and a PPO-based variant to dynamically identify demonstrations that maximize both diversity (quantified by the label distribution) and relevance to the task objective. This strategy ensures a balanced representation of the reference data, improving accuracy and generalization. Through extensive experiments on multiple benchmark datasets, including diverse reasoning tasks, and involving 14 closed-source and open-source LLMs, we demonstrate that RDES significantly outperforms ten established baselines. Our evaluation also analyzes performance across varying numbers of demonstrations on selected datasets. Furthermore, we investigate incorporating Chain-of-Thought (CoT) reasoning, which further boosts predictive performance. The results highlight the potential of RL for adaptive demonstration selection and for addressing challenges in ICL.
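The abstract's core idea, an RL agent that selects demonstrations by trading off relevance against label diversity, can be sketched as a toy tabular Q-learning loop. Everything below is an illustrative assumption rather than the paper's actual design: the candidate pool, the fixed relevance scores, the entropy-based diversity bonus, and the equal reward weighting are all hypothetical choices made for this sketch.

```python
import math
import random
from collections import Counter

random.seed(0)

# Hypothetical toy pool: (demo_id, label, relevance-to-query score in [0, 1]).
# In RDES these would come from a real dataset and a relevance model.
POOL = [(0, "pos", 0.9), (1, "pos", 0.85), (2, "neg", 0.6),
        (3, "neg", 0.55), (4, "neu", 0.5), (5, "pos", 0.8)]
K = 3  # number of demonstrations to select per prompt

def label_entropy(labels):
    """Shannon entropy of the label distribution (a diversity proxy)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def reward(selected):
    """Mean relevance plus a label-diversity bonus (equal weighting is an
    illustrative choice, not the paper's reward)."""
    rel = sum(score for _, _, score in selected) / len(selected)
    div = label_entropy([lab for _, lab, _ in selected])
    return rel + div

# Tabular Q-learning: state = frozenset of chosen demo ids, action = demo id.
Q = {}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(2000):
    selected, state = [], frozenset()
    for step in range(K):
        actions = [d for d in POOL if d[0] not in state]
        if random.random() < eps:  # epsilon-greedy exploration
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda d: Q.get((state, d[0]), 0.0))
        next_state = state | {a[0]}
        selected.append(a)
        # Terminal reward once K demos are chosen; zero mid-episode.
        r = reward(selected) if step == K - 1 else 0.0
        remaining = [d[0] for d in POOL if d[0] not in next_state]
        best_next = max((Q.get((next_state, i), 0.0) for i in remaining),
                        default=0.0)
        q = Q.get((state, a[0]), 0.0)
        Q[(state, a[0])] = q + alpha * (r + gamma * best_next - q)
        state = next_state

# Greedy rollout with the learned Q-table.
best, state = [], frozenset()
for _ in range(K):
    actions = [d for d in POOL if d[0] not in state]
    a = max(actions, key=lambda d: Q.get((state, d[0]), 0.0))
    best.append(a)
    state = state | {a[0]}

print(sorted(lab for _, lab, _ in best))
```

With this toy reward, a set covering all three labels scores higher than any same-label pair, so the learned policy favors the most relevant demonstration of each label, which is the relevance-diversity balance the abstract describes.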
Lay Summary: Large Language Models (LLMs) are powerful AI tools that can do many language-based tasks, such as text annotation and question answering. One way to make them perform well, especially on tasks with limited examples, is called In-Context Learning (ICL), where you give the model a few examples, or demonstrations, along with the new task. A big challenge in ICL is figuring out which examples to choose. Simply picking examples that are very similar to the new task isn't always the best approach; it's also really important to include diverse examples that show the model different types of situations or labels the model might encounter. This paper introduces a new method called Relevance-Diversity Enhanced Selection (RDES). RDES uses a type of AI learning, like teaching a computer to play a game, to select the best set of examples. This learning process helps RDES find examples that are not only relevant to the task but also cover a wide variety of cases, aiming to improve the model's ability to handle new, unseen examples. The method can even be combined with making the model show its step-by-step thinking, known as Chain-of-Thought (CoT). Our experiments show that RDES helps LLMs perform significantly better on various tasks compared to other methods for choosing examples, improving their accuracy and reasoning capabilities.
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Models, In-Context Learning, Reinforcement Learning, Chain-of-Thought
Submission Number: 1061