Rethinking Nonlinearity: Trainable Gaussian Mixture Modules for Modern Neural Architectures
By: Weiguo Lu, Gangnan Yuan, Hong-kun Zhang, and more
Potential Business Impact:
Makes computer brains learn better and faster.
Neural networks in general, from MLPs and CNNs to attention-based Transformers, are constructed from layers of linear combinations followed by nonlinear operations such as ReLU, Sigmoid, or Softmax. Despite their strength, these conventional designs are often limited in how they introduce nonlinearity, relying on the choice of activation function. In this work, we introduce Gaussian Mixture-Inspired Nonlinear Modules (GMNM), a new class of differentiable modules that draw on the universal density approximation property of Gaussian mixture models (GMMs) and the distance (metric space) properties of the Gaussian kernel. By relaxing probabilistic constraints and adopting a flexible parameterization of Gaussian projections, GMNM can be seamlessly integrated into diverse neural architectures and trained end-to-end with gradient-based methods. Our experiments demonstrate that incorporating GMNM into architectures such as MLPs, CNNs, attention mechanisms, and LSTMs consistently improves performance over standard baselines. These results highlight GMNM's potential as a powerful and flexible module for enhancing efficiency and accuracy across a wide range of machine learning applications.
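To make the idea concrete, below is a minimal sketch of what a Gaussian mixture-inspired nonlinear module could look like in PyTorch. It is an illustration of the general recipe described in the abstract (learnable Gaussian kernels with relaxed probabilistic constraints, applied after a linear projection), not the authors' implementation; all names such as `GMNM`, `num_kernels`, and the specific parameterization are assumptions.

```python
# Illustrative sketch only: a trainable nonlinearity built from unnormalized
# Gaussian kernels, assuming a PyTorch setting. Not the paper's exact module.
import torch
import torch.nn as nn


class GMNM(nn.Module):
    """Projects the input linearly, then passes each output unit through a
    learnable sum of Gaussian kernels.

    Unlike a true Gaussian mixture model, the mixture weights here are free
    (no softmax or normalization), so the module acts as a flexible trainable
    nonlinearity rather than a probability density.
    """

    def __init__(self, in_features: int, out_features: int, num_kernels: int = 8):
        super().__init__()
        # Linear projection applied before the Gaussian kernels.
        self.proj = nn.Linear(in_features, out_features)
        # Per-output, per-kernel centers, log inverse widths, and signed weights.
        self.centers = nn.Parameter(torch.randn(out_features, num_kernels))
        self.log_inv_width = nn.Parameter(torch.zeros(out_features, num_kernels))
        self.weights = nn.Parameter(torch.randn(out_features, num_kernels) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.proj(x)                      # (..., out_features)
        z = z.unsqueeze(-1)                   # (..., out_features, 1)
        # Squared distance to each kernel center, scaled by a learnable width.
        d2 = (z - self.centers) ** 2 * torch.exp(self.log_inv_width)
        # Unnormalized Gaussian responses combined with free (signed) weights.
        return (self.weights * torch.exp(-0.5 * d2)).sum(dim=-1)


# Usage: drop the module into an MLP in place of a fixed activation function.
mlp = nn.Sequential(GMNM(784, 256), GMNM(256, 10))
out = mlp(torch.randn(32, 784))  # -> shape (32, 10)
```

Because every parameter is differentiable, such a module can be trained end-to-end with standard gradient-based optimizers alongside the rest of the network, which is the property the abstract emphasizes.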
Similar Papers
Gaussian mixture layers for neural networks
Machine Learning (CS)
Makes AI learn better with new kinds of layers.
uGMM-NN: Univariate Gaussian Mixture Model Neural Network
Machine Learning (CS)
Makes computers understand and guess better.
Transformers as Unsupervised Learning Algorithms: A study on Gaussian Mixtures
Machine Learning (CS)
Teaches computers to learn without examples.