Abstract: In-context learning (ICL) enables large language models (LLMs) to exhibit remarkable emergent capabilities across various scenarios.
Unfortunately, introducing demonstrations can dramatically inflate prompt length, placing a significant burden on hardware.
In addition, randomly chosen demonstrations usually yield limited improvements in ICL, necessitating demonstration selection from the accessible candidates.
Previous studies introduce extra modules to perform demonstration compression or selection independently.
In this paper, we propose UniICL, an ICL framework that \textbf{Uni}fies demonstration selection, compression, and final response generation within a single frozen LLM.
Specifically, UniICL first projects the actual demonstrations and the inference text input into short virtual tokens.
Then, these virtual tokens are used to select suitable demonstrations by measuring the semantic similarity between candidate demonstrations and the inference input in the latent space.
Finally, the inference text input, together with the selected virtual demonstrations, is fed into the same frozen LLM for response generation.
Notably, UniICL is a parameter-efficient framework with only 17M trainable parameters, all originating from the projection layer.
We conduct experiments and analyses on in-domain and out-of-domain datasets covering both generation and understanding tasks, encompassing ICL scenarios with both plentiful and limited demonstration candidates.
Results show that UniICL effectively unifies demonstration selection with $12\times$ compression at negligible inference latency, and generalizes successfully to unseen data and tasks\footnote{The code and model will be released in the final version.}.
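Below is a minimal, hypothetical sketch of the pipeline summarized above, not the authors' implementation: the frozen LLM is replaced by a dummy encoder so the script runs standalone, and the attention-pooling projector, helper names, and all dimensions are illustrative assumptions.

```python
# Hypothetical sketch of UniICL's compress -> select -> generate pipeline.
import torch
import torch.nn.functional as F

HIDDEN, N_VIRTUAL, TOP_K = 768, 4, 2

class Projector(torch.nn.Module):
    """The only trainable component: pools LLM hidden states into a few virtual tokens."""
    def __init__(self, hidden=HIDDEN, n_virtual=N_VIRTUAL):
        super().__init__()
        self.proj = torch.nn.Linear(hidden, hidden)
        self.queries = torch.nn.Parameter(torch.randn(n_virtual, hidden))

    def forward(self, hidden_states):              # (seq_len, hidden)
        attn = torch.softmax(self.queries @ hidden_states.T / HIDDEN ** 0.5, dim=-1)
        return self.proj(attn @ hidden_states)      # (n_virtual, hidden)

def frozen_llm_encode(text: str) -> torch.Tensor:
    """Stand-in for the frozen LLM's hidden states over `text` (dummy tensor)."""
    torch.manual_seed(abs(hash(text)) % (2 ** 31))
    return torch.randn(16, HIDDEN)

projector = Projector()
candidates = ["demo A ...", "demo B ...", "demo C ..."]
query = "inference input ..."

# 1) Compress each candidate demonstration and the query into short virtual tokens.
demo_virtual = [projector(frozen_llm_encode(d)) for d in candidates]
query_virtual = projector(frozen_llm_encode(query))

# 2) Select demonstrations by cosine similarity in the latent space.
scores = torch.stack([
    F.cosine_similarity(v.mean(0), query_virtual.mean(0), dim=0) for v in demo_virtual
])
selected = scores.topk(TOP_K).indices.tolist()

# 3) The selected virtual demonstrations would be prepended to the query embeddings
#    and fed to the same frozen LLM for generation (omitted in this sketch).
print("selected demonstrations:", [candidates[i] for i in selected])
```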
Paper Type: Long
Research Area: Generation
Research Area Keywords: Generation, Language Modeling
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings, Approaches to low compute settings-efficiency
Languages Studied: English
Section 2 Permission To Publish Peer Reviewers Content Agreement: Authors grant permission for ACL to publish peer reviewers' content
Submission Number: 370