Abstract: Large language models exhibit emergent capabilities that appear unexpectedly at scale, yet a
theoretical framework explaining why and how they emerge is still missing. We prove that language models
are non-ergodic systems and provide a mathematical framework, based on Stuart Kauffman's theory of the
adjacent possible (TAP), to explain capability emergence. Our resource-constrained TAP equation
shows how architectural, training, and contextual constraints interact to shape model capabilities
through phase transitions in semantic space. Experiments with three different language models
demonstrate that capabilities emerge through discrete transitions guided by constraint interactions and
path-dependent exploration. This framework provides a theoretical basis for understanding emergence in
language models and guides the development of architectures that can drive capability emergence.
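
The abstract invokes a resource-constrained TAP equation without stating its form. For orientation only, a minimal sketch of the standard TAP equation from Kauffman and colleagues, which the resource-constrained variant presumably extends, is:

\[
M_{t+1} = M_t + \sum_{i=1}^{M_t} \alpha_i \binom{M_t}{i}, \qquad \alpha_i = \alpha^i,\quad 0 < \alpha < 1,
\]

where \(M_t\) is the number of distinct items (here, capabilities) at step \(t\), the binomial coefficient counts the \(i\)-element subsets of existing items that could combine into something new, and \(\alpha^i\) discounts larger, less likely combinations. This form and the parameterization \(\alpha_i = \alpha^i\) are assumptions drawn from the TAP literature, not the paper's own equation.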