Keywords: Large Language Models, Neural Scaling Laws, Scaling Laws, Emergence, Language Models, ICML, metrics, evaluation
TL;DR: Emergent abilities of LLMs disappear with different metrics or better statistics.
Abstract: Recent work claims that large language models display \textit{emergent abilities}: abilities that are absent in smaller-scale models but present in larger-scale models. What makes emergent abilities intriguing is two-fold: their \textit{sharpness}, transitioning seemingly instantaneously from not present to present, and their \textit{unpredictability}, appearing at seemingly unforeseeable model scales.
We present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, emergent abilities appear due to the researcher’s choice of metric. Specifically, nonlinear or discontinuous metrics produce apparent emergent abilities, whereas linear or continuous metrics produce smooth, continuous, predictable changes in model performance.
We present our alternative explanation in a simple mathematical model, then test it in three ways: we (1) make, test and confirm predictions on the effect of metric choice using the InstructGPT/GPT-3 family; (2) make, test and confirm predictions about metric choices in a meta-analysis on BIG-Bench; and (3) show how to choose metrics to produce never-before-seen seemingly emergent abilities on vision tasks.
These analyses provide evidence that alleged emergent abilities disappear with different metrics or better statistics.
By challenging a popular conception, our work highlights the difficulty of accurately evaluating generative AI models.
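The abstract's central mechanism can be illustrated with a toy simulation (not the authors' code; the power-law form of per-token accuracy and the sequence length are illustrative assumptions): if per-token accuracy improves smoothly and continuously with scale, then a nonlinear metric such as exact match over an L-token sequence (roughly p^L) can still appear to jump sharply at some scale, producing seemingly emergent behavior.

```python
# Toy sketch of the metric-choice argument. Assumed, for illustration only:
# per-token accuracy follows a smooth power law in model scale.

def per_token_accuracy(scale):
    """Smooth, continuous improvement with scale (hypothetical form)."""
    return 1.0 - 0.5 * scale ** -0.3

def exact_match(scale, seq_len=10):
    """Nonlinear metric: all seq_len tokens must be correct (p ** L)."""
    return per_token_accuracy(scale) ** seq_len

for scale in [1, 10, 100, 1000, 10000, 100000]:
    p = per_token_accuracy(scale)   # linear metric: changes smoothly
    em = exact_match(scale)         # nonlinear metric: appears to "emerge"
    print(f"scale={scale:>6}  per-token={p:.3f}  exact-match={em:.3f}")
```

Under these assumptions the per-token (linear) metric climbs gradually, while exact match stays near zero at small scales and then rises steeply, mimicking a sharp, unpredictable emergent ability even though the underlying capability changed smoothly.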
Submission Number: 22