Interactive Implicit In-context Learning

16 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: In-context Learning; Task Vector; Large Language Models
Abstract: In-context learning (ICL) empowers large language models (LLMs) to generalize from few-shot demonstrations but suffers from quadratic computational complexity and unstable performance as the number of demonstrations increases. Implicit ICL (I$^2$CL) addresses these limitations by encoding demonstrations into a unified task vector injected during inference, achieving significant efficiency gains and order invariance. However, existing approaches prioritize inference efficiency at the cost of interactive capabilities crucial to standard ICL—inter-demonstration and demonstration-query interactions. Substantial research underscores that these interactions are fundamental for strong reasoning performance. To bridge this critical gap, we propose Interactive Implicit In-Context Learning (I$^3$CL), a simple yet effective framework that restores these essential capabilities of standard ICL. I$^3$CL integrates only two lightweight interactive modules, preserving I$^2$CL's core efficiency with minimal computational overhead. Experimental evaluation across nine diverse tasks using four LLMs shows that I$^3$CL delivers a significant average performance gain of 7.26\% over prior I$^2$CL baselines. This substantial improvement strongly indicates that restoring interactive capabilities is essential for advancing the effectiveness of implicit ICL methods.
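The task-vector mechanism the abstract describes can be sketched under toy assumptions: demonstrations are condensed into a single vector (here via mean pooling, which gives the order invariance mentioned above) and additively injected into the query's hidden state at inference. All names, shapes, and the pooling/injection choices below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

D_MODEL = 8  # toy hidden dimension


def encode(text_id: int) -> np.ndarray:
    """Stand-in for an LLM's hidden representation of a demonstration.
    Seeded by the demonstration id so the sketch is deterministic."""
    return np.random.default_rng(text_id).standard_normal(D_MODEL)


def task_vector(demo_ids: list[int]) -> np.ndarray:
    """Condense all demonstrations into one unified task vector.
    Mean pooling makes the result invariant to demonstration order."""
    return np.mean([encode(i) for i in demo_ids], axis=0)


def inject(query_hidden: np.ndarray, tv: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Inject the scaled task vector into the query's hidden state,
    replacing explicit in-context demonstrations at inference time."""
    return query_hidden + alpha * tv


tv_a = task_vector([1, 2, 3])
tv_b = task_vector([3, 1, 2])  # permuted demonstrations -> same vector
assert np.allclose(tv_a, tv_b)

query = encode(42)
out = inject(query, tv_a)
print(out.shape)  # (8,)
```

Note that this one-vector summary is exactly what removes inter-demonstration and demonstration-query interactions: once pooled, no demonstration can attend to another or to the query, which is the gap I$^3$CL's interactive modules are designed to close.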
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 7608