VeLU: Variance-enhanced Learning Unit for Deep Neural Networks
By: Ashkan Shakarami, Yousef Yeganeh, Azade Farshad, and more
Potential Business Impact:
Makes computer brains learn faster and better.
Activation functions are fundamental in deep neural networks and directly impact gradient flow, optimization stability, and generalization. Although ReLU remains standard because of its simplicity, it suffers from vanishing gradients and lacks adaptability. Alternatives such as Swish and GELU introduce smooth transitions but fail to adjust dynamically to input statistics. We propose VeLU, a Variance-enhanced Learning Unit: an activation function that scales dynamically with input variance by integrating ArcTan-Sin transformations and Wasserstein-2 regularization, effectively mitigating covariate shift and stabilizing optimization. Extensive experiments on ViT_B16, VGG19, ResNet50, DenseNet121, MobileNetV2, and EfficientNetB3 confirm VeLU's superiority over ReLU, ReLU6, Swish, and GELU on six vision benchmarks. The code for VeLU is publicly available on GitHub.
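To make the idea concrete, below is a minimal PyTorch sketch of a variance-scaled ArcTan-Sin activation in the spirit of the abstract. It is not the authors' implementation: the class name `VeLUSketch`, the learnable gain `alpha`, and the specific scaling rule are assumptions for illustration, and the Wasserstein-2 regularization term is omitted. The official formulation is in the authors' GitHub repository.

```python
# Hypothetical sketch of a variance-scaled ArcTan-Sin activation.
# The exact VeLU formula, parameters, and W2 regularization are not
# reproduced here; see the authors' released code for the real thing.
import torch
import torch.nn as nn


class VeLUSketch(nn.Module):
    def __init__(self, eps: float = 1e-5):
        super().__init__()
        # Learnable gain on the variance-dependent scale (an assumption,
        # not taken from the paper).
        self.alpha = nn.Parameter(torch.ones(1))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The input variance drives the scaling, so the activation adapts
        # to the statistics of its input (the "variance-enhanced" idea).
        var = x.var(unbiased=False)
        scale = 1.0 + self.alpha * var / (var + 1.0 + self.eps)
        # ArcTan-Sin transformation: a smooth, bounded nonlinearity.
        return scale * x * torch.atan(torch.sin(x))


if __name__ == "__main__":
    act = VeLUSketch()
    x = torch.randn(8, 16)
    print(act(x).shape)  # torch.Size([8, 16])
```

As a usage note, a module like this can be dropped in wherever `nn.ReLU()` appears in a network definition; the variance-dependent scale is what distinguishes it from fixed activations such as Swish or GELU.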
Similar Papers
Gompertz Linear Units: Leveraging Asymmetry for Enhanced Learning Dynamics
Machine Learning (CS)
Makes computer brains learn better and faster.
The Resurrection of the ReLU
Machine Learning (CS)
Fixes broken computer learning parts.
Robust Deep Network Learning of Nonlinear Regression Tasks by Parametric Leaky Exponential Linear Units (LELUs) and a Diffusion Metric
Machine Learning (CS)
Makes computer learning better by fixing a math problem.