The Garden of Forking Paths: Observing Dynamic Parameters Distribution in Large Language Models

TMLR Paper 2337 Authors

05 Mar 2024 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: A substantial gap persists in understanding the reasons behind the exceptional performance of the Transformer architecture in NLP. One particularly unexplored area is the mechanistic description of how the distribution of parameters evolves during training. In this work we suggest that examining the time evolution of the statistical distribution of model parameters, and specifically bifurcation effects, can help in understanding model quality, potentially reducing training costs and evaluation efforts, and can empirically reveal the reasons behind the effectiveness of weight sparsification.
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Florent_Krzakala1
Submission Number: 2337