Abstract: Large Language Models (LLMs) have demonstrated remarkable in-context learning (ICL) capabilities. In this study, we explore a surprising phenomenon related to ICL: LLMs can perform multiple, computationally distinct ICL tasks simultaneously, during a single inference call, a capability we term "task superposition". We provide empirical evidence of this phenomenon across various LLM families and scales and show that it emerges even if we train a model to in-context learn only one task at a time. We offer theoretical explanations showing that this capability is well within the expressive power of transformers. We also investigate how LLMs internally compose task vectors during superposition. Furthermore, we show that larger models can solve more ICL tasks in parallel and calibrate their output distribution better. Our findings offer insights into the latent capabilities of LLMs, further substantiate the perspective of "LLMs as superposition of simulators", and raise questions about the mechanisms enabling simultaneous task execution.
Lay Summary: Large Language Models (LLMs) are typically trained to perform one task at a time. However, our research uncovers a surprising phenomenon: LLMs can perform multiple, computationally distinct in-context learning tasks simultaneously when given a single prompt -- a phenomenon we term "task superposition."
For instance, when given examples of both arithmetic problems and language translations in the same input, an LLM can correctly solve math equations and translate text at the same time. We tested this across various LLMs, including GPT-3.5 and Llama-3, and found that larger models are better at performing multiple tasks concurrently.
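To make the setup concrete, below is a minimal sketch (not taken from the paper's code) of how such a mixed-task prompt could be constructed. The helper `query_model` is a hypothetical stand-in for whatever LLM inference call is used; the task examples are illustrative placeholders.

```python
# Minimal sketch of building a "superposed" in-context prompt that mixes
# two computationally distinct tasks: two-digit addition and EN->FR translation.
import random

addition_examples = [("12 + 7", "19"), ("30 + 45", "75"), ("8 + 16", "24")]
translation_examples = [("cat", "chat"), ("house", "maison"), ("water", "eau")]

def build_superposed_prompt(n_shots: int = 6, seed: int = 0) -> str:
    """Interleave in-context examples from both tasks into a single prompt."""
    rng = random.Random(seed)
    pool = addition_examples + translation_examples
    shots = rng.sample(pool, k=min(n_shots, len(pool)))
    lines = [f"Input: {x}\nOutput: {y}" for x, y in shots]
    # The final query leaves it to the model which task to continue.
    lines.append("Input: 25 + 14\nOutput:")
    return "\n\n".join(lines)

prompt = build_superposed_prompt()
print(prompt)
# completion = query_model(prompt)  # hypothetical model call; sampling repeatedly
# would reveal the mixture over task-specific answers described above.
```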
Our findings suggest that LLMs internally combine representations of different tasks, enabling them to perform several functions at once. We hope that our findings will contribute to understanding in-context learning mechanisms and enhance our knowledge of LLMs overall.
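As a purely conceptual illustration of this "combining representations" idea (our assumption, not the paper's implementation), the toy example below treats each task as a vector in the model's hidden space and shows that a mixed-prompt representation formed as a convex combination of single-task vectors lets the mixing weights be recovered by least squares.

```python
# Toy illustration: if a mixed-prompt representation is (approximately) a convex
# combination of single-task "task vectors", the mixture weights are recoverable.
import numpy as np

rng = np.random.default_rng(0)
d = 64                              # hidden-state dimension (arbitrary choice)
v_add = rng.normal(size=d)          # stand-in task vector for addition
v_translate = rng.normal(size=d)    # stand-in task vector for translation

# Suppose 70% of the in-context examples come from the addition task.
alpha = 0.7
v_mixed = alpha * v_add + (1 - alpha) * v_translate

# Project the mixed representation back onto the single-task vectors.
basis = np.stack([v_add, v_translate], axis=1)          # shape (d, 2)
weights, *_ = np.linalg.lstsq(basis, v_mixed, rcond=None)
print(np.round(weights, 3))  # ~[0.7, 0.3]: weights track the task proportions
```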
Primary Area: Deep Learning->Large Language Models
Keywords: task superposition, in-context learning
Submission Number: 7157