Efficient Demonstration Selection by Label-Alignment Divergence Reranking for In-Context Learning

ACL ARR 2026 January Submission 2956 Authors

04 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: in-context learning, few-shot classification, large language models
Abstract: In-context learning (ICL) performance is highly sensitive to which demonstrations are selected. Most existing selectors rely on semantic similarity, which can retrieve label-conflicting examples under ambiguity or noisy demonstration pools, leading to degraded performance. We propose LADR (Label-Aligned Divergence Reranking), a two-stage framework that augments TopK retrieval with label-distribution alignment. LADR fine-tunes a BERT-like classifier to estimate label distributions for the test input and retrieved candidates, and reranks them using Jensen-Shannon divergence. Candidate-side distributions are computed and cached offline, making inference-time reranking lightweight. Across seven benchmarks and multiple LLM families and scales, LADR consistently outperforms strong baselines. LADR is also robust to label permutation and reversal, as well as out-of-domain demonstration pools, and achieves a favorable accuracy-efficiency trade-off. The code is released here: https://anonymous.4open.science/r/L2D-401B
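The reranking step described in the abstract can be sketched as follows. This is a minimal illustration, not the released implementation: it assumes candidate label distributions have already been computed and cached offline by the fine-tuned classifier, and the distribution values, candidate IDs, and the `top_m` cutoff below are all hypothetical.

```python
import math

def js_divergence(p, q):
    # Jensen-Shannon divergence between two discrete distributions
    # (base-2 logs, so the value is bounded in [0, 1]).
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def ladr_rerank(test_dist, candidates, top_m=2):
    # candidates: (demo_id, cached_label_distribution) pairs returned by a
    # first-stage TopK similarity retriever. Sorting by JS divergence keeps
    # the demonstrations whose label distributions best align with the
    # test input's estimated distribution.
    scored = sorted(candidates, key=lambda c: js_divergence(test_dist, c[1]))
    return [demo_id for demo_id, _ in scored[:top_m]]

# Hypothetical 3-class example: candidate "a" agrees with the test input's
# estimated label distribution, "b" is label-conflicting, "c" is uncertain.
test_dist = [0.8, 0.1, 0.1]
pool = [("a", [0.7, 0.2, 0.1]),
        ("b", [0.1, 0.8, 0.1]),
        ("c", [0.34, 0.33, 0.33])]
print(ladr_rerank(test_dist, pool))  # → ['a', 'c']
```

Because the candidate-side distributions are cached, the inference-time cost per test input is one classifier forward pass plus K divergence computations, which is the lightweight trade-off the abstract highlights.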
Paper Type: Long
Research Area: Information Extraction and Retrieval
Research Area Keywords: passage retrieval; re-ranking
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Reproduction study, Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 2956