DEAL-ICL: Robust In-Context Learning via Semantic Expansion, Alignment, and Dual-View Adaptation

ACL ARR 2026 January Submission 5249 Authors

Published: 05 Jan 2026 (modified: 20 Mar 2026)
License: CC BY 4.0
Keywords: In-Context Learning, Large Language Models, Robustness, Semantic Expansion, Dual-View Adaptation, Demonstration Retrieval
Abstract: While Large Language Models (LLMs) have demonstrated impressive In-Context Learning (ICL) capabilities, their performance remains highly sensitive to demonstration quality and test-time distribution shifts. Existing approaches focus primarily on optimizing demonstration retrieval or calibrating the decoding process, yet they leave model parameters frozen at inference time, limiting the model's ability to fundamentally adapt to unseen queries. To bridge this gap, we propose DEAL-ICL, a robust framework that enhances ICL through progressive adaptation. DEAL-ICL operates in three stages: (1) semantic expansion to enrich the demonstration pool, (2) ICL-aligned supervised fine-tuning to internalize the retrieval-augmented format, and (3) a novel Dual-View Test-Time Adaptation mechanism. During inference, we construct anchor and perturbed views of the input and optimize a geometric consistency objective to dynamically update model parameters. Extensive experiments with Llama3 and Qwen2 demonstrate that DEAL-ICL achieves state-of-the-art performance. Notably, under a challenging random-retrieval setting, our method consistently outperforms contrastive decoding baselines on a range of natural language understanding tasks by effectively mitigating pre-training biases.
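To make the Dual-View Test-Time Adaptation step concrete, the sketch below illustrates one possible reading of the abstract; it is an assumption, not the authors' released code. The anchor view is taken to be the demonstration-augmented prompt, the perturbed view a demonstration-shuffled copy, a symmetric KL divergence stands in for the paper's geometric consistency objective, and adaptation is restricted to LayerNorm parameters (Tent-style), with GPT-2 as a small stand-in for Llama3/Qwen2. All function names are illustrative.

```python
# Hypothetical sketch of Dual-View Test-Time Adaptation (illustrative only).
# Anchor view: query preceded by retrieved demonstrations.
# Perturbed view: same query with the demonstration order shuffled.
import random
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # small stand-in; the paper evaluates Llama3 and Qwen2

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Adapt only LayerNorm parameters; everything else stays frozen.
for name, p in model.named_parameters():
    p.requires_grad = "ln" in name.lower()
adapt_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(adapt_params, lr=1e-4)

def build_prompt(demos, query):
    return "\n".join(demos) + "\n" + query

def next_token_log_dist(prompt):
    """Log-distribution over the next token given a prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    logits = model(**inputs).logits[0, -1]
    return F.log_softmax(logits, dim=-1)

def dual_view_step(demos, query):
    """One consistency-driven adaptation step, then a prediction."""
    anchor = build_prompt(demos, query)
    perturbed = build_prompt(random.sample(demos, len(demos)), query)

    log_p = next_token_log_dist(anchor)
    log_q = next_token_log_dist(perturbed)

    # Symmetric KL as a stand-in for the geometric consistency objective.
    loss = 0.5 * (
        F.kl_div(log_q, log_p, log_target=True, reduction="sum")
        + F.kl_div(log_p, log_q, log_target=True, reduction="sum")
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Predict from the anchor view with the freshly adapted parameters.
    with torch.no_grad():
        pred_id = next_token_log_dist(anchor).argmax().item()
    return tokenizer.decode(pred_id)

demos = [
    "Review: great movie. Sentiment: positive",
    "Review: terrible plot. Sentiment: negative",
]
print(dual_view_step(demos, "Review: loved every minute. Sentiment:"))
```

Restricting the update to normalization parameters is one common way to keep per-query adaptation cheap and stable; whether DEAL-ICL updates all parameters or a subset is not specified in the abstract.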
Paper Type: Long
Research Area: Language Models
Research Area Keywords: Large Language Models, In-Context Learning, Robustness, Semantic Expansion, Dual-View Adaptation, Demonstration Retrieval
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 5249