SIGMA: Scalable Spectral Insights for LLM Collapse
By: Yi Gu, Lingyou Pang, Xiangkun Ye, and more
Potential Business Impact:
Detects when AI gets worse from training on its own output.
The rapid adoption of synthetic data for training Large Language Models (LLMs) has introduced the technical challenge of "model collapse": a degenerative process where recursive training on model-generated content leads to a contraction of distributional variance and representational quality. While the phenomenology of collapse is increasingly evident, rigorous methods to quantify and predict its onset in high-dimensional spaces remain elusive. In this paper, we introduce SIGMA (Spectral Inequalities for Gram Matrix Analysis), a unified framework that benchmarks model collapse through the spectral lens of the embedding Gram matrix. By deriving and utilizing deterministic and stochastic bounds on the matrix's spectrum, SIGMA provides a mathematically grounded metric to track the contraction of the representation space. Crucially, our stochastic formulation enables scalable estimation of these bounds, making the framework applicable to large-scale foundation models where full eigendecomposition is intractable. We demonstrate that SIGMA effectively captures the transition towards degenerate states, offering both theoretical insights into the mechanics of collapse and a practical, scalable tool for monitoring the health of recursive training pipelines.
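The abstract does not spell out SIGMA's estimators, but the general idea of tracking spectral contraction of an embedding Gram matrix with matrix-free stochastic methods can be sketched. The snippet below is a minimal illustration, not the paper's method: it assumes standard tools (a Hutchinson trace estimator and power iteration, both using only Gram matrix-vector products), and the `eff_rank` diagnostic is a hypothetical collapse proxy chosen for illustration.

```python
import numpy as np

def stochastic_spectral_diagnostics(X, num_probes=16, power_iters=30, seed=0):
    """Estimate spectral summaries of the Gram matrix G = X X^T / n of an
    embedding matrix X (n samples x d dims) without full eigendecomposition.

    - trace(G) via Hutchinson's estimator with Rademacher probe vectors,
    - lambda_max(G) via power iteration,
    - an effective-rank proxy trace(G) / lambda_max(G), which drifts toward 1
      as the representation space contracts onto a few directions.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape

    def gram_mv(v):
        # G v = (X X^T v) / n, computed without forming the n x n matrix G.
        return X @ (X.T @ v) / n

    # Hutchinson: E[z^T G z] = trace(G) for Rademacher z with +-1 entries.
    probes = rng.choice([-1.0, 1.0], size=(num_probes, n))
    trace_est = float(np.mean([z @ gram_mv(z) for z in probes]))

    # Power iteration for the dominant eigenvalue of G.
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(power_iters):
        w = gram_mv(v)
        v = w / np.linalg.norm(w)
    lam_max = float(v @ gram_mv(v))

    return {"trace": trace_est, "lambda_max": lam_max,
            "eff_rank": trace_est / lam_max}
```

Comparing healthy embeddings (isotropic Gaussian) against near-collapsed ones (all samples close to a single direction) shows the effective-rank proxy dropping toward 1, the kind of transition a monitoring tool would flag.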
Similar Papers
The Homogeneity Trap: Spectral Collapse in Doubly-Stochastic Deep Networks
Machine Learning (CS)
Makes AI learn better by fixing a hidden math problem.
Pay Attention Later: From Vector Space Diffusion to Linearithmic Spectral Phase-Locking
Machine Learning (CS)
Lets AI learn new things without forgetting old ones.
Escaping Collapse: The Strength of Weak Data for Large Language Model Training
Machine Learning (CS)
Improves AI learning by focusing on hard problems.