LION-DG: Layer-Informed Initialization with Deep Gradient Protocols for Accelerated Neural Network Training
By: Hyunjun Kim
Potential Business Impact:
Speeds up neural network training (up to ~11% faster convergence in the reported experiments) with no extra hyperparameters or compute.
Weight initialization remains decisive for neural network optimization, yet existing methods are largely layer-agnostic. We study initialization for deeply supervised architectures with auxiliary classifiers, where untrained auxiliary heads can destabilize early training through gradient interference. We propose LION-DG, a layer-informed initialization that zero-initializes the auxiliary classifier heads while applying standard He initialization to the backbone. We prove that this implements Gradient Awakening: auxiliary gradients are exactly zero at initialization and then phase in naturally as the head weights grow, providing an implicit warmup without hyperparameters. Experiments on CIFAR-10 and CIFAR-100 with DenseNet-DS and ResNet-DS architectures demonstrate: (1) DenseNet-DS: +8.3% faster convergence on CIFAR-10 with comparable accuracy; (2) hybrid approach: combining LSUV with LION-DG achieves the best accuracy (81.92% on CIFAR-10); (3) ResNet-DS: a positive speedup on CIFAR-100 (+11.3%) with a side-tap auxiliary design. We identify architecture-specific trade-offs and provide clear guidelines for practitioners. LION-DG is simple, requires zero hyperparameters, and adds no computational overhead.
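A minimal PyTorch-style sketch of the initialization scheme described above, assuming auxiliary heads can be identified by a naming convention; the helper name lion_dg_init and the "aux" substring match are illustrative assumptions, not details from the paper:

```python
import torch.nn as nn

def lion_dg_init(model: nn.Module, aux_name_tags=("aux",)) -> None:
    """Layer-informed init sketch: He (Kaiming) init for backbone layers,
    zeros for auxiliary classifier heads (assumed identifiable by name)."""
    for name, module in model.named_modules():
        is_aux = any(tag in name for tag in aux_name_tags)
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            if is_aux:
                # Zero-init the auxiliary head: the gradient it sends back
                # into the backbone is exactly zero at step 0, while its own
                # weights still receive nonzero gradients and grow, so the
                # auxiliary signal "awakens" gradually (implicit warmup).
                nn.init.zeros_(module.weight)
                if module.bias is not None:
                    nn.init.zeros_(module.bias)
            else:
                # Standard He initialization for the backbone.
                nn.init.kaiming_normal_(
                    module.weight, mode="fan_out", nonlinearity="relu"
                )
                if module.bias is not None:
                    nn.init.zeros_(module.bias)
        elif isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d)):
            nn.init.ones_(module.weight)
            nn.init.zeros_(module.bias)
```

In this sketch the initializer is applied once to the assembled deeply supervised model before training; no schedule or extra hyperparameter is introduced, matching the paper's claim of zero added overhead.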
Similar Papers
A Good Start Matters: Enhancing Continual Learning with Data-Driven Weight Initialization
Machine Learning (CS)
Makes AI learn new things faster and better.
Depth-Aware Initialization for Stable and Efficient Neural Network Training
Machine Learning (CS)
Makes computer brains learn faster and better.
LoRA-DA: Data-Aware Initialization for Low-Rank Adaptation via Asymptotic Analysis
Machine Learning (CS)
Makes AI learn new tasks faster and better.