Spectral Manifold Regularization for Stable and Modular Routing in Deep MoE Architectures
By: Ibrahim Delibasoglu
Potential Business Impact:
Keeps AI learning new things without forgetting old ones.
Mixture of Experts (MoE) architectures enable efficient scaling of neural networks but suffer from expert collapse, where routing converges to a few dominant experts. This reduces effective model capacity and causes catastrophic interference during adaptation. We propose the Spectrally-Regularized Mixture of Experts (SR-MoE), which imposes geometric constraints on the routing manifold to enforce structural modularity. Our method uses dual regularization: spectral norm constraints bound the Lipschitz constant of the routing function, while stable rank penalties preserve high-dimensional feature diversity in expert selection. We evaluate SR-MoE across architectural scales and dataset complexities using modular one-shot adaptation tasks. Results show that traditional linear gating fails as depth increases (accuracy drops of up to 4.72% due to expert entanglement), while SR-MoE maintains structural integrity (mean interference of -0.32%). Our spectral constraints facilitate positive knowledge transfer, enabling localized expert updates without global performance decay. SR-MoE provides a general solution for building high-capacity, modular networks capable of stable lifelong learning.
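The abstract describes a dual regularizer on the gating matrix: a spectral norm term that bounds the router's Lipschitz constant and a stable rank term that preserves diversity in expert selection. The sketch below is a minimal illustration of that idea, not the authors' implementation; the class name `SpectrallyRegularizedGate`, the penalty weights `lambda_spec` and `lambda_rank`, and the target norm `sigma_max` are illustrative assumptions.

```python
# Minimal sketch of a spectrally regularized MoE gate, assuming a plain
# linear router. The regularizer penalizes spectral norm above a target
# (Lipschitz bound) and rewards a high stable rank ||W||_F^2 / sigma_1^2
# (feature diversity in routing). Hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpectrallyRegularizedGate(nn.Module):
    def __init__(self, d_model: int, n_experts: int,
                 sigma_max: float = 1.0,
                 lambda_spec: float = 1e-2,
                 lambda_rank: float = 1e-2):
        super().__init__()
        self.w = nn.Linear(d_model, n_experts, bias=False)
        self.sigma_max = sigma_max
        self.lambda_spec = lambda_spec
        self.lambda_rank = lambda_rank

    def forward(self, x: torch.Tensor):
        # Routing probabilities over experts.
        probs = F.softmax(self.w(x), dim=-1)

        # Singular values of the gating matrix (n_experts x d_model),
        # returned in descending order.
        s = torch.linalg.svdvals(self.w.weight)
        spec_norm = s[0]
        stable_rank = (s ** 2).sum() / (spec_norm ** 2 + 1e-8)

        # Spectral norm hinge (keeps the routing map sigma_max-Lipschitz)
        # minus a stable-rank bonus (keeps expert selection high-rank).
        reg = (self.lambda_spec * F.relu(spec_norm - self.sigma_max) ** 2
               - self.lambda_rank * stable_rank)
        return probs, reg


# Usage: add `reg` to the task loss so the router is trained jointly.
gate = SpectrallyRegularizedGate(d_model=64, n_experts=8)
probs, reg = gate(torch.randn(4, 64))
```

In this formulation the two terms pull in complementary directions: the spectral bound keeps routing decisions smooth under input perturbations, while the stable rank bonus discourages the gating matrix from collapsing onto a few dominant directions, which is the failure mode the paper identifies as expert collapse.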
Similar Papers
Mixture-of-Experts with Gradient Conflict-Driven Subspace Topology Pruning for Emergent Modularity
Machine Learning (CS)
Helps AI learn better without forgetting or needing instructions.
Stable-MoE: Lyapunov-based Token Routing for Distributed Mixture-of-Experts Training over Edge Networks
Distributed, Parallel, and Cluster Computing
Makes smart devices learn faster with less power.
DualSparse-MoE: Coordinating Tensor/Neuron-Level Sparsity with Expert Partition and Reconstruction
Machine Learning (CS)
Makes smart computer programs run faster and better.