Abstract: In-context learning has become a standard learning paradigm for language models. However, current prompt engineering methods operate in the token space, which may limit their effectiveness. We explore the potential of the activation space through Iterative Context Vectors (ICVs), a technique that improves task performance without backpropagation. ICVs are obtained by first extracting and iteratively refining activations within a language model, and are then applied during inference with minimal computational and memory overhead. We evaluate ICVs across a range of tasks and models and observe significant improvements. Our findings suggest that activation steering is a promising direction for in-context learning, opening new avenues for future research.
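For concreteness, below is a minimal sketch of the inference-time step the abstract describes: a fixed vector is added to one layer's hidden states through a forward hook, which costs a single broadcast add per forward pass. It assumes a HuggingFace-style GPT-2 model; the vector `icv`, the layer index, and the strength `alpha` are illustrative placeholders, since the paper's extraction and iterative refinement procedure is not reproduced here.

```python
# Sketch of activation steering at inference time (not the authors' code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

layer_idx = 6  # layer whose residual stream is steered (assumption)
alpha = 0.1    # steering strength (assumption)
icv = torch.randn(model.config.hidden_size)  # stand-in for a refined ICV
icv = icv / icv.norm()                       # unit-normalize the vector

def steer(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element holds the hidden
    # states with shape (batch, seq_len, hidden); add the vector to every
    # position and pass the rest of the tuple through unchanged.
    hidden_states = output[0] + alpha * icv.to(output[0].dtype)
    return (hidden_states,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(steer)
inputs = tok("Translate to French: cheese ->", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=10)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()  # detach the hook to restore the unsteered model
```

Because the hook only adds a precomputed vector, no gradients, extra parameters, or demonstration tokens are needed at inference, which is consistent with the minimal-overhead claim above.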
Paper Type: Short
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: NLP in resource-constrained settings
Contribution Types: NLP engineering experiment, Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 5605