Pivot-ICL: Adaptive Exemplar Selection for In-Context Learning

ACL ARR 2026 January Submission6598 Authors

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: in-context learning, reasoning, large language model, adaptive treatment
Abstract: In-context learning (ICL) enables LLMs to complete complex tasks more effectively, but success hinges on selecting the right exemplars. Prior work typically selects exemplars by similarity to the test input, but this can fail when no close match exists, especially for challenging or out-of-distribution examples. In such cases, generic exemplars that broadly represent the task may be more helpful. This raises two key questions: (1) Which exemplars best represent the overall task distribution? (2) Which test examples are too far from the available exemplars? We propose Pivot-ICL, a method that adaptively selects either dynamic, input-specific exemplars or static, generic exemplars. We model this decision with a bipartite graph between test examples and candidate exemplars, applying well-established graph mining algorithms such as Hyperlink-Induced Topic Search (HITS) to enable weighted, iterative, bidirectional scoring. This graph-based voting identifies both representative exemplars and the test inputs that require them. Experiments show Pivot-ICL consistently improves performance across tasks, achieving a +8.8% relative gain over the best baseline.
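The bidirectional scoring the abstract describes can be illustrated with a minimal sketch of weighted HITS on a test–exemplar bipartite graph. Everything here is an assumption for illustration: the similarity matrix `W`, the iteration count, and the final selection rule are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical similarity matrix W[i, j] between test example i and
# candidate exemplar j (e.g. cosine similarity of embeddings).
rng = np.random.default_rng(0)
W = rng.random((4, 6))  # 4 test examples, 6 candidate exemplars

def hits_bipartite(W, iters=50):
    """Weighted HITS on a bipartite graph: test-example 'hub' scores
    and exemplar 'authority' scores reinforce each other iteratively."""
    hubs = np.ones(W.shape[0])
    for _ in range(iters):
        auth = W.T @ hubs              # exemplars endorsed by test inputs
        auth /= np.linalg.norm(auth)   # normalize to keep scores bounded
        hubs = W @ auth                # test inputs re-weighted by exemplar scores
        hubs /= np.linalg.norm(hubs)
    return hubs, auth

hubs, auth = hits_bipartite(W)
# High-authority exemplars broadly represent the task; low-hub test
# examples lack a strong match and may need the generic exemplars.
generic_exemplars = np.argsort(-auth)[:2]
```

The mutual reinforcement is the point: an exemplar scores highly when many (well-matched) test inputs point to it, and a test input's score reflects how well it is covered by high-scoring exemplars, giving the weighted, iterative, bidirectional voting the abstract refers to.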
Paper Type: Short
Research Area: Question Answering
Research Area Keywords: in-context learning, reasoning, large language model, adaptive treatment
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 6598