Loss landscape geometry reveals stagewise development of transformers

Published: 16 Jun 2024, Last Modified: 15 Jul 2024
Venue: HiLD at ICML 2024 Poster
License: CC BY 4.0
Keywords: Loss landscape geometry, Local learning coefficient, Singular learning theory, Models of learning, Competition between structures, Competition between heuristics
TL;DR: We track loss landscape geometry to divide transformer language model training into stages, during which the model develops a progression of distinct internal structures and heuristics.
Abstract: The development of the internal structure of neural networks throughout training occurs in tandem with changes in the local geometry of the population loss. By quantifying the degeneracy of this geometry using the recently proposed Local Learning Coefficient, we show that the training process for a transformer language model can be decomposed into discrete developmental stages. We connect these stages to interpretable shifts in input–output behavior and developments in internal structure. These findings offer new insights into transformer development and underscore the crucial role of loss landscape geometry in understanding the dynamics of deep learning.
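For reference, the Local Learning Coefficient (LLC) mentioned in the abstract is typically estimated from samples of a localized, tempered posterior. The following is a sketch of the standard estimator from the singular learning theory literature, not a formula stated in this abstract; the inverse temperature $\beta^* = 1/\log n$ is the conventional choice for a dataset of $n$ samples:

\[
\hat{\lambda}(w^*) \;=\; n\beta^*\,\Big( \mathbb{E}^{\beta^*}_{w \mid w^*,\, \gamma}\big[L_n(w)\big] \;-\; L_n(w^*) \Big), \qquad \beta^* = \frac{1}{\log n},
\]

where $w^*$ is the trained parameter vector, $L_n$ is the empirical loss, and the expectation is taken over a posterior localized near $w^*$ with confinement strength $\gamma$, sampled in practice with stochastic-gradient Langevin dynamics (SGLD). Lower $\hat{\lambda}$ indicates a more degenerate local geometry.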
Student Paper: No
Submission Number: 41