On the Geometry of Semantics in Next-token Prediction
By: Yize Zhao, Christos Thrampoulidis
Potential Business Impact:
Explains how AI language models come to understand the meanings of words.
Modern language models demonstrate a remarkable ability to capture linguistic meaning despite being trained solely through next-token prediction (NTP). We investigate how this conceptually simple training objective leads models to extract and encode latent semantic and grammatical concepts. Our analysis reveals that NTP optimization implicitly guides models to encode concepts via singular value decomposition (SVD) factors of a centered data-sparsity matrix that captures next-word co-occurrence patterns. While the model never explicitly constructs this matrix, learned word and context embeddings effectively factor it to capture linguistic structure. We find that the most important SVD factors are learned first during training, motivating the use of spectral clustering of embeddings to identify human-interpretable semantics, including both classical k-means and a new orthant-based method directly motivated by our interpretation of concepts. Overall, our work bridges distributional semantics, neural collapse geometry, and neural network training dynamics, providing insights into how NTP's implicit biases shape the emergence of meaning representations in language models.
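To make the pipeline described in the abstract concrete, below is a minimal, self-contained sketch on toy data: build a next-word co-occurrence ("data-sparsity") matrix, center it, take its SVD, and then cluster words by the sign pattern (orthant) of their leading singular factors. This is not the authors' implementation; the toy corpus, the row normalization, the column-mean centering, and the choice of two factors are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of: centered data-sparsity matrix -> SVD
# -> orthant-based clustering of word embeddings. Toy corpus and centering choice
# are assumptions made for illustration only.
import numpy as np
from collections import defaultdict

# Toy corpus of (context, next-word) pairs.
corpus = [
    ("the cat", "sat"), ("the dog", "sat"),
    ("the cat", "ran"), ("the dog", "ran"),
    ("a red", "apple"), ("a green", "apple"),
    ("a red", "pear"), ("a green", "pear"),
]

contexts = sorted({c for c, _ in corpus})
words = sorted({w for _, w in corpus})
ctx_idx = {c: i for i, c in enumerate(contexts)}
word_idx = {w: j for j, w in enumerate(words)}

# Data-sparsity matrix S: entry (i, j) reflects how often word j follows context i.
S = np.zeros((len(contexts), len(words)))
for c, w in corpus:
    S[ctx_idx[c], word_idx[w]] += 1.0
S /= S.sum(axis=1, keepdims=True)  # row-normalize to empirical next-word probabilities

# Center the matrix (here: subtract the column mean), then factor it with SVD,
# mirroring what the analysis suggests NTP training does implicitly via embeddings.
S_centered = S - S.mean(axis=0, keepdims=True)
U, sigma, Vt = np.linalg.svd(S_centered, full_matrices=False)

# Keep the top-k factors (the ones reported to be learned first during training).
k = 2
word_embed = Vt[:k].T * sigma[:k]   # word "embeddings" from right singular vectors
ctx_embed = U[:, :k] * sigma[:k]    # context "embeddings" from left singular vectors

# Orthant-based clustering: group words by the sign pattern of their top-k factors.
sign_patterns = np.sign(np.round(word_embed, 8))
clusters = defaultdict(list)
for w, pattern in zip(words, sign_patterns):
    clusters[tuple(pattern)].append(w)

for pattern, members in clusters.items():
    print(pattern, members)
```

On this toy corpus the sign patterns separate the verb-like next words ("sat", "ran") from the noun-like ones ("apple", "pear"), which is the kind of human-interpretable grouping the abstract attributes to the leading SVD factors; classical k-means on `word_embed` would be the alternative clustering route mentioned there.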
Similar Papers
Idea-Gated Transformers: Enforcing Semantic Coherence via Differentiable Vocabulary Pruning
Computation and Language
Keeps AI writing focused on the main topic.
Exploring Next Token Prediction For Optimizing Databases
Databases
Helps computers make databases run faster.
Context-level Language Modeling by Learning Predictive Context Embeddings
Computation and Language
Makes AI understand stories better, not just words.