Geometric and Dynamic Scaling in Deep Transformers
By: Haoran Su, Chenyu You
Potential Business Impact:
Helps very deep AI models keep learning instead of their internal representations collapsing.
Despite their empirical success, pushing Transformer architectures to extreme depth often leads to a paradoxical failure: representations become increasingly redundant, lose rank, and ultimately collapse. Existing explanations largely attribute this phenomenon to optimization instability or vanishing gradients, yet such accounts fail to explain why collapse persists even under modern normalization and initialization schemes. In this paper, we argue that the collapse of deep Transformers is fundamentally a geometric problem. Standard residual updates implicitly assume that feature accumulation is always beneficial, but offer no mechanism to constrain update directions or to erase outdated information. As depth increases, this leads to systematic drift off the semantic manifold and monotonic feature accumulation, causing representational degeneracy. We propose a unified geometric framework that addresses these failures through two orthogonal principles. First, manifold-constrained hyper-connections restrict residual updates to valid local tangent directions, preventing uncontrolled manifold drift. Second, deep delta learning introduces data-dependent, non-monotonic updates that enable reflection and erasure of redundant features rather than their unconditional accumulation. Together, these mechanisms decouple the direction and sign of feature updates, yielding a stable geometric evolution across depth. We term the resulting architecture the Manifold-Geometric Transformer (MGT). Our analysis predicts that enforcing geometric validity while allowing dynamic erasure is essential for avoiding rank collapse in ultra-deep networks. We outline an evaluation protocol for Transformers exceeding 100 layers to test the hypothesis that geometry, rather than depth itself, is the key limiting factor in deep representation learning.
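The two principles in the abstract, tangent-constrained residual updates and signed, data-dependent "delta" updates, can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's actual MGT implementation: the local manifold is approximated (as an assumption, for the simplest codimension-1 case) by a single normal direction, and the erase-vs-accumulate behavior is modeled by a scalar gate in [-1, 1]. All function names and the `tanh` gate are hypothetical choices for illustration.

```python
import numpy as np

def tangent_project(update, normal):
    """Keep only the component of `update` lying in the local tangent plane
    by removing its component along the (assumed) manifold normal."""
    normal = normal / np.linalg.norm(normal)
    return update - np.dot(update, normal) * normal

def delta_residual_step(h, update, normal, gate_vec):
    """One geometric residual step: constrain the update to tangent
    directions, then scale it by a data-dependent signed gate so the layer
    can erase features (negative gate) rather than only accumulate them."""
    tangent_update = tangent_project(update, normal)
    beta = np.tanh(np.dot(gate_vec, h))  # signed gate in (-1, 1), depends on h
    return h + beta * tangent_update

# Toy usage: one hidden state, one update, random local geometry.
rng = np.random.default_rng(0)
h = rng.standard_normal(8)
new_h = delta_residual_step(
    h,
    rng.standard_normal(8),   # candidate feature update
    rng.standard_normal(8),   # local normal direction (assumed known)
    rng.standard_normal(8),   # parameters of the data-dependent gate
)
```

The point of the sketch is the decoupling the abstract describes: the projection fixes the *direction* of the update (geometric validity), while the gate independently fixes its *sign and magnitude* (dynamic erasure), so neither mechanism interferes with the other.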
Similar Papers
ManifoldFormer: Geometric Deep Learning for Neural Dynamics on Riemannian Manifolds
Machine Learning (CS)
Helps brain signals show patterns better.
The Geometry of Abstraction: Continual Learning via Recursive Quotienting
Machine Learning (CS)
Helps computers remember everything without forgetting.
Neural Collapse is Globally Optimal in Deep Regularized ResNets and Transformers
Machine Learning (CS)
Makes AI learn better and faster as it gets deeper.