Abstract: In-context learning (ICL) is a valuable capability exhibited by Transformers pretrained on diverse sequence tasks. However, previous studies have observed that ICL often conflicts with the model’s inherent in-weight learning (IWL) ability. By examining the representation space learned by a toy model in synthetic experiments, we identify the shared encoding space for context and samples in Transformers as a potential source of this conflict. To address this, we modify the model architecture to separately encode the context and samples into two distinct spaces: a \textit{task representation space} and a \textit{sample representation space}. We model these two spaces under a simple yet principled framework, assuming a linear representational structure and treating them as a pair of dual spaces. Both theoretical analysis and empirical results demonstrate the effectiveness of our proposed architecture, CoQE, in the single-value answer setting. It not only enhances ICL performance through improved representation learning, but also successfully reconciles ICL and IWL capabilities across synthetic few-shot classification and a newly designed pseudo-arithmetic task. The code is available at: \url{https://github.com/McGuinnessChen/dual-representation-space-encoding}.
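The dual-space idea described above can be illustrated with a minimal sketch: one encoder maps the context to a point in a task representation space, another maps a query to a point in a sample representation space, and a prediction is formed by pairing the two. All names, dimensions, and the linear pooling choice here are illustrative assumptions, not the authors' CoQE implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_repr, d_in = 8, 16  # illustrative dimensionalities

# Hypothetical linear encoders: one for the context (task space),
# one for individual samples (sample space).
W_task = rng.normal(size=(d_repr, d_in))
W_sample = rng.normal(size=(d_repr, d_in))

def encode_task(context):
    # Mean-pool the in-context examples, then project into task space.
    return W_task @ context.mean(axis=0)

def encode_sample(x):
    # Project a single query into sample space.
    return W_sample @ x

context = rng.normal(size=(5, d_in))  # five in-context examples
query = rng.normal(size=d_in)

t = encode_task(context)     # task representation
s = encode_sample(query)     # sample representation

# Treating the two spaces as duals, a prediction score is their pairing.
score = float(t @ s)
```

The key design point the abstract argues for is that `t` and `s` live in separate spaces rather than a single shared encoding space, so context-driven (ICL) and weight-driven (IWL) signals need not compete for the same representation.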
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We made the following revisions:
- Following the AE's request, we revised the abstract to clarify the single-value task setting and the precise experimental scope.
- We updated other statements in the main text related to the experimental scope to make them more specific.
- We improved the presentation of Figure 1.
Code: https://github.com/McGuinnessChen/dual-representation-space-encoding
Supplementary Material: zip
Assigned Action Editor: ~Andrew_Kyle_Lampinen1
Submission Number: 6805