Simplify In-Context Learning

ICLR 2026 Conference Submission 5909 Authors

15 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: in-context learning
Abstract: Traditional in-context learning (ICL) enhances the performance and capabilities of large language models (LLMs) primarily by optimizing decomposition strategies, reformatting, and ordering. However, when task difficulty significantly exceeds the model's capabilities, merely refining the in-context examples becomes ineffective. In contrast to prior work that focuses on improving the capabilities of LLMs, we propose Simplified In-context Learning (SICL), a framework that reduces task difficulty through task decomposition. A complex task $A$ is decomposed into a sequence of subtasks $A_1, A_2, \cdots, A_m$, each less difficult than the original. When the difficulty of a subtask $A_i$ lies within the capability threshold of the LLM, the LLM can still achieve strong performance. SICL incorporates two complementary strategies: a training-free strategy that achieves rapid decomposition through clustering and uniform partitioning of the output space, and a training-driven strategy that adaptively determines the optimal decomposition for each query via a scoring predictor. Empirically, SICL achieves state-of-the-art (SOTA) results across six tasks, three LLMs, and ten datasets, attaining the lowest mean squared error (MSE) of 0.712 and the highest accuracy of 77.0\%. We further extend SICL to generative tasks, where it achieves a ROUGE-1 of 0.270 on summarization and a BLEU of 0.254 on machine translation. SICL also generalizes to the vision modality, yielding a maximum accuracy of 98.8\% on image classification. Notably, SICL delivers consistent SOTA performance with commercial LLMs such as GPT-4o.
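To make the decomposition concrete, the sketch below shows one way the training-free uniform-partitioning strategy could look for a task with numeric outputs: the in-context example pool is split into bins over the output range, and the query is answered with demonstrations drawn only from its (easier) subtask. This is an assumption-laden illustration, not the submission's implementation; the example-pool format, the bin count `m`, the prompt template, and the `route` and `llm` callables are all hypothetical.

```python
from typing import Callable, List, Tuple

def uniform_output_bins(pool: List[Tuple[str, float]], m: int) -> List[List[Tuple[str, float]]]:
    """Split an in-context example pool into m subtasks by uniformly
    partitioning the range of output values (illustrative assumption)."""
    lo = min(y for _, y in pool)
    hi = max(y for _, y in pool)
    width = (hi - lo) / m or 1.0          # avoid zero width when all outputs coincide
    bins: List[List[Tuple[str, float]]] = [[] for _ in range(m)]
    for x, y in pool:
        idx = min(int((y - lo) / width), m - 1)
        bins[idx].append((x, y))
    return bins

def solve_with_decomposition(query: str,
                             pool: List[Tuple[str, float]],
                             m: int,
                             route: Callable[[str, List[List[Tuple[str, float]]]], int],
                             llm: Callable[[str], str]) -> str:
    """Pick the subtask (bin) for the query, then prompt the LLM with
    demonstrations restricted to that subtask (hypothetical interfaces)."""
    bins = uniform_output_bins(pool, m)
    subtask = bins[route(query, bins)]
    demos = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in subtask)
    return llm(f"{demos}\nInput: {query}\nOutput:")
```

In the training-driven variant described in the abstract, a learned scoring predictor would take the role of `route`, selecting the subtask (or the decomposition granularity) per query.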
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 5909