The Good, the Bad, and the Debatable: A Survey on the Impacts of Data for In-Context Learning

ACL ARR 2025 May Submission 5489 Authors

20 May 2025 (modified: 03 Jul 2025) · CC BY 4.0
Abstract: In-context learning is an emergent learning paradigm that enables an LLM to learn an unseen task from a number of demonstrations presented in its context window. The quality of these demonstrations is of paramount importance because 1) context window size limitations restrict how many demonstrations can be shown to the model, and 2) the model must identify the task and potentially learn new, unseen input-output mappings from this limited demonstration set. An increasing body of work has also shown that predictions are sensitive to perturbations of the demonstration set. Given this importance, this work surveys the current literature on the relationship between data and in-context learning. We present our survey in three parts: the "good" -- qualities that are desirable when selecting demonstrations; the "bad" -- qualities of demonstrations that can negatively impact the model, as well as issues that can arise in presenting demonstrations; and the "debatable" -- qualities of demonstrations with mixed results, or factors that modulate data impacts.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: data shortcuts/artifacts, data influence
Contribution Types: Surveys
Languages Studied: English
Submission Number: 5489