Understanding Post-Training Structural Changes in Large Language Models
By: Xinyu He, Xianghui Cao
Post-training fundamentally alters the behavior of large language models (LLMs), yet its impact on the internal parameter space remains poorly understood. In this work, we conduct a systematic singular value decomposition (SVD) analysis of principal linear layers in pretrained LLMs, focusing on two widely adopted post-training methods: instruction tuning and long-chain-of-thought (Long-CoT) distillation. Our analysis reveals two consistent and unexpected structural changes: (1) a near-uniform geometric scaling of singular values across layers, which theoretically modulates attention scores; and (2) a highly consistent pair of orthogonal transformations applied to the left and right singular vectors of each matrix. Disrupting this orthogonal consistency leads to catastrophic performance degradation. Based on these findings, we propose a simple yet effective framework that interprets post-training as a reparameterization of fixed subspaces in the pretrained parameter space. Further experiments reveal that singular value scaling behaves as a secondary effect, analogous to a temperature adjustment, whereas the core functional transformation lies in the coordinated rotation of singular vectors. These results challenge the prevailing view of the parameter space in large models as a black box, uncover the first clear regularities in how parameters evolve during training, and provide a new perspective for deeper investigation into how model parameters change.
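The structural picture the abstract describes can be illustrated with a small NumPy sketch. The matrices and the scaling factor below are hypothetical stand-ins, not the paper's actual weights: we build a "post-trained" matrix by geometrically scaling the singular values of a "pretrained" matrix and rotating its singular vectors with orthogonal maps, then verify that an SVD analysis recovers exactly that structure.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64

# Hypothetical pretrained weight matrix and its SVD
W_pre = rng.standard_normal((d, d))
U, S, Vt = np.linalg.svd(W_pre)

# Construct a synthetic "post-trained" matrix following the paper's
# described structure: uniform singular-value scaling plus consistent
# orthogonal transformations of the left/right singular vectors.
scale = 1.1                                               # assumed scaling factor
Q_left = np.linalg.qr(rng.standard_normal((d, d)))[0]     # random orthogonal map
Q_right = np.linalg.qr(rng.standard_normal((d, d)))[0]
W_post = (Q_left @ U) @ np.diag(scale * S) @ (Vt @ Q_right.T)

# SVD analysis of the post-trained matrix
U2, S2, Vt2 = np.linalg.svd(W_post)

# (1) Singular values are scaled near-uniformly across the spectrum
print(np.allclose(S2 / S, scale))

# (2) The left singular vectors are related by an orthogonal transform:
# R = U2 @ U.T should satisfy R @ R.T = I
R = U2 @ U.T
print(np.allclose(R @ R.T, np.eye(d)))
```

Under this reparameterization view, rescaling the diagonal acts like a temperature adjustment on downstream scores, while the rotations `Q_left` and `Q_right` carry the functional change; the sketch only mirrors that decomposition on synthetic data.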