Keywords: Large Language Models (LLMs), Neural embeddings, Word embeddings, Semantic Encoding, Neural Collapse, Interpretability
TL;DR: We examine how next-token prediction training captures latent linguistic concepts by linking text representations (words, contexts) to a centered data-sparsity matrix that encodes corpus statistics, and show how its singular factors drive semantic learning.
Abstract: Modern language models demonstrate a remarkable ability to capture linguistic meaning despite being trained solely through next-token prediction (NTP). We investigate how this conceptually simple training objective leads models to extract and encode latent semantic and grammatical concepts. Our analysis reveals that NTP optimization implicitly guides models to encode concepts via singular value decomposition (SVD) factors of a centered data-sparsity matrix that captures next-word co-occurrence patterns. While the model never explicitly constructs this matrix, learned word and context embeddings effectively factor it to capture linguistic structure. We find that the most important SVD factors are learned first during training, motivating the use of spectral clustering of embeddings to identify human-interpretable semantics, including both classical k-means and a new orthant-based method directly motivated by our interpretation of concepts. Overall, our work bridges distributional semantics, neural collapse geometry, and neural network training dynamics, providing insights into how NTP's implicit biases shape the emergence of meaning representations in language models.
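The following is a minimal illustrative sketch (not the paper's implementation) of the pipeline the abstract describes: building a next-word co-occurrence support matrix from a toy set of (context, next-word) pairs, centering it, taking its SVD, and clustering the resulting word embeddings both with classical k-means and with a simple orthant-style sign-pattern grouping. The toy corpus, centering choice, embedding scaling, and cluster counts are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of the abstract's pipeline: centered data-sparsity matrix -> SVD -> clustering.
import numpy as np
from sklearn.cluster import KMeans

# Toy (context, next-word) pairs; in the paper these statistics come from NTP training data.
corpus = [
    ("the cat sat on the", "mat"),
    ("the dog sat on the", "rug"),
    ("she drank a cup of", "tea"),
    ("he drank a cup of", "coffee"),
]

contexts = sorted({c for c, _ in corpus})
words = sorted({w for _, w in corpus})
ctx_idx = {c: i for i, c in enumerate(contexts)}
word_idx = {w: j for j, w in enumerate(words)}

# 0/1 support matrix recording which next-words follow which contexts.
S = np.zeros((len(contexts), len(words)))
for c, w in corpus:
    S[ctx_idx[c], word_idx[w]] = 1.0

# Center the matrix (here, by its global mean) and factor it with SVD.
S_centered = S - S.mean()
U, sigma, Vt = np.linalg.svd(S_centered, full_matrices=False)

# Word embeddings taken from the right singular factors, scaled by singular values.
word_emb = (np.diag(sigma) @ Vt).T  # shape: (num_words, rank)

# (1) Classical k-means clustering of the spectral word embeddings.
labels_kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(word_emb)

# (2) Orthant-style grouping: cluster words by the sign pattern of their
#     top singular-vector coordinates (each orthant ~ one combination of concepts).
signs = np.sign(word_emb[:, :2])
labels_orthant = {w: tuple(signs[word_idx[w]]) for w in words}

print(dict(zip(words, labels_kmeans)))
print(labels_orthant)
```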
Track: Main-Long
Submission Number: 15