Pivot-ICL: Adaptive Exemplar Selection for In-Context Learning

ICLR 2026 Conference Submission14699 Authors

19 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: in-context learning, reasoning, large language model, graph-based methods, few-shot learning
TL;DR: We propose Pivot-ICL, which treats each question differently based on how well it is covered by the current exemplars, and show improved performance on various reasoning tasks.
Abstract: In-context learning (ICL) enables LLMs to better complete complex tasks, but its success hinges on selecting the right exemplars. Prior work often selects exemplars based on similarity to the test input, but this can fail when no close match exists—especially for challenging or out-of-distribution examples. In such cases, generic exemplars that broadly represent the task may be more helpful. This raises two key questions: (1) Which exemplars best represent the overall task distribution? (2) Which test examples are too far from the available exemplars? We propose Pivot-ICL, a method that adaptively selects either dynamic, input-specific exemplars or static, generic exemplars. We model this decision using a bipartite graph between test examples and candidate exemplars, applying well-established graph mining algorithms such as Hyperlink-Induced Topic Search (HITS) to enable weighted, iterative, and bidirectional scoring. This graph-based voting identifies both representative exemplars and the test inputs that require them. Experiments show Pivot-ICL consistently improves performance across tasks, achieving a +8.8% relative gain over the best baseline. Further analysis attributes the improvement to effective adaptive treatment of both in-distribution and out-of-distribution examples.
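The abstract does not specify how the bipartite graph is built or weighted, so the following is only a minimal sketch of the general idea: weighted HITS on a test-input/exemplar similarity matrix, where authority scores rank exemplars by how representative they are and hub scores indicate how well each test input is covered. The function name `hits_bipartite` and the similarity matrix `S` are hypothetical, not taken from the paper.

```python
import numpy as np

def hits_bipartite(S, iters=50, tol=1e-8):
    """Weighted HITS on a bipartite graph (a generic sketch, not the
    paper's exact algorithm).

    S[i, j] is the similarity between test input i (hub side) and
    candidate exemplar j (authority side).
    Returns (hub_scores, authority_scores), each L2-normalized.
    """
    n_test, n_ex = S.shape
    h = np.ones(n_test)
    a = np.ones(n_ex)
    for _ in range(iters):
        # Exemplars accumulate weighted votes from test inputs ...
        a_new = S.T @ h
        a_new /= np.linalg.norm(a_new)
        # ... and test inputs accumulate weighted votes back.
        h_new = S @ a_new
        h_new /= np.linalg.norm(h_new)
        converged = np.allclose(h, h_new, atol=tol) and np.allclose(a, a_new, atol=tol)
        h, a = h_new, a_new
        if converged:
            break
    return h, a

# Toy example: exemplar 0 is similar to most test inputs (generic),
# test input 2 is far from every exemplar (poorly covered).
S = np.array([[1.0, 0.1],
              [0.9, 0.2],
              [0.1, 0.05]])
hubs, auths = hits_bipartite(S)
```

Under this sketch, a high authority score marks an exemplar as broadly representative (a candidate "static" exemplar), while a low hub score flags a test input that no exemplar covers well, which is roughly the adaptive decision the abstract describes.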
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 14699