Keywords: Understanding high-level properties of models, Developmental interpretability
TL;DR: Most universal neurons that appear across independently initialized models emerge in early training steps and persist throughout training. They are a small fraction of all neurons, but ablating them causes a marked decline in performance.
Abstract: We investigate the phenomenon of neuron universality in independently trained GPT-2 Small models, examining how these universal neurons—neurons with consistently correlated activations across models—emerge and evolve throughout training. By analyzing five GPT-2 models at three checkpoints (100k, 200k, 300k steps), we identify universal neurons through pairwise correlation analysis of activations over a dataset of 5 million tokens. Ablation experiments reveal significant functional impacts of universal neurons on model predictions, measured via loss and KL divergence. Additionally, we quantify neuron persistence, demonstrating high stability of universal neurons across training checkpoints, particularly in deeper layers. These findings suggest stable and universal representational structures emerge during neural network training.
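The pairwise correlation analysis described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the activation-matrix layout (neurons × tokens), and the correlation threshold are all assumptions for the sake of the example.

```python
import numpy as np

def universal_neurons(acts_a, acts_b, threshold=0.9):
    """Flag neurons in model A as 'universal' if some neuron in model B has
    Pearson-correlated activations above `threshold` over the same token set.

    acts_a, acts_b: (n_neurons, n_tokens) activation matrices from two
    independently trained models, evaluated on the same tokens.
    Names and threshold are illustrative, not taken from the paper.
    """
    # Standardize each neuron's activations over the token dimension.
    za = (acts_a - acts_a.mean(1, keepdims=True)) / acts_a.std(1, keepdims=True)
    zb = (acts_b - acts_b.mean(1, keepdims=True)) / acts_b.std(1, keepdims=True)
    # All pairwise Pearson correlations at once: shape (n_a, n_b).
    corr = za @ zb.T / acts_a.shape[1]
    # A neuron is 'universal' if its best match in the other model is strong.
    return corr.max(axis=1) > threshold
```

In the paper's setting, this comparison would be repeated over all model pairs and checkpoints, with activations collected over the 5-million-token dataset.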
Submission Number: 96