Towards the Effect of Examples on In-Context Learning: A Theoretical Case Study

Published: 11 Oct 2024, Last Modified: 14 Dec 2024 · M3L Poster · CC BY 4.0
Keywords: language model, in-context learning, learning theory, Bayesian statistics
Abstract: In-context learning (ICL) has emerged as a powerful ability of large language models (LLMs) to adapt to new tasks by leveraging a few (demonstration) examples. Despite its effectiveness, the mechanism behind ICL remains underexplored. This paper uses a Bayesian framework to investigate how ICL integrates pre-training knowledge and examples for binary classification. In particular, we introduce a probabilistic model that extends the Gaussian mixture model to exactly quantify the impact of pre-training knowledge, label frequency, and label noise on prediction accuracy. Our analysis shows that when the pre-training knowledge contradicts the knowledge in the examples, whether the ICL prediction relies more on the pre-training knowledge or on the examples depends on the number of examples. In addition, the label frequency and label noise of the examples both affect the accuracy of the ICL prediction: the minority class attains lower accuracy, and how label errors impact the accuracy is determined by the specific error rates of the two classes. Extensive simulations verify the correctness of the theoretical results, and real-data experiments also align with the theoretical insights. Our work reveals the dual role of pre-training knowledge and examples in ICL, offering a deeper understanding of LLMs' behavior in classification tasks.
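To make the prior-versus-examples trade-off described in the abstract concrete, below is a minimal, hypothetical Python sketch (not the paper's actual model): pre-training knowledge is modeled as a conjugate Gaussian prior on a class mean, and in-context examples drawn from a contradicting task pull the posterior toward the examples as their number grows. All names, parameter values, and the conjugate-update form are illustrative assumptions.

```python
# A minimal, hypothetical sketch (not the paper's exact model) of how a Bayesian
# predictor might trade off a pre-training prior against in-context examples in a
# two-class Gaussian mixture. All parameter names and values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# "Pre-training knowledge": a prior belief that class 1 is centered at +1.
prior_mean, prior_strength = +1.0, 5.0   # prior acts like 5 pseudo-observations

# In-context examples come from a task that *contradicts* the prior: class 1 at -1.
true_mean, sigma = -1.0, 1.0

def posterior_mean(n_examples: int) -> float:
    """Posterior mean of the class-1 center after seeing n in-context examples."""
    x = rng.normal(true_mean, sigma, size=n_examples)
    # Conjugate Gaussian update: blend the prior with the observed examples.
    return (prior_strength * prior_mean + x.sum()) / (prior_strength + n_examples)

for n in [0, 2, 8, 32, 128]:
    print(f"n={n:4d}  posterior class-1 mean ≈ {posterior_mean(n):+.2f}")
```

Running the sketch, the posterior mean stays near the prior (+1) when few examples are given and approaches the example mean (-1) as the number of examples grows, mirroring the abstract's claim that reliance on pre-training knowledge versus examples depends on the number of examples.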
Is NeurIPS Submission: No
Submission Number: 11