Rethinking Nonlinearity: Trainable Gaussian Mixture Modules for Modern Neural Architectures

Published: October 8, 2025 | arXiv ID: 2510.06660v1

By: Weiguo Lu, Gangnan Yuan, Hong-kun Zhang and more

Potential Business Impact:

Could make neural networks learn more accurately and efficiently across a wide range of applications.

Business Areas:
Multi-level Marketing, Sales and Marketing

Neural networks in general, from MLPs and CNNs to attention-based Transformers, are constructed from layers of linear combinations followed by nonlinear operations such as ReLU, Sigmoid, or Softmax. Despite their strength, these conventional designs are often limited in the nonlinearity they can introduce by the choice of activation functions. In this work, we introduce Gaussian Mixture-Inspired Nonlinear Modules (GMNM), a new class of differentiable modules that draw on the universal density approximation property of Gaussian mixture models (GMMs) and the distance (metric space) properties of the Gaussian kernel. By relaxing probabilistic constraints and adopting a flexible parameterization of Gaussian projections, GMNM can be seamlessly integrated into diverse neural architectures and trained end-to-end with gradient-based methods. Our experiments demonstrate that incorporating GMNM into architectures such as MLPs, CNNs, attention mechanisms, and LSTMs consistently improves performance over standard baselines. These results highlight GMNM's potential as a powerful and flexible module for enhancing efficiency and accuracy across a wide range of machine learning applications.
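To make the idea concrete, below is a minimal PyTorch sketch of what a Gaussian-mixture-inspired nonlinear module could look like, based only on the abstract's description: each output unit applies a learned mixture of Gaussian kernels to a linear projection of the input, with mixture weights left unconstrained (the probabilistic normalization is relaxed). The class name, parameterization, and component count are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GMNMSketch(nn.Module):
    """Hypothetical Gaussian-mixture-inspired nonlinear module.

    Each output unit sums K Gaussian "bumps" evaluated on a learned linear
    projection of the input. Weights are unconstrained (no simplex/normalization
    constraint), so the module is an ordinary differentiable layer trainable
    end-to-end by gradient descent.
    """
    def __init__(self, in_features: int, out_features: int, num_components: int = 4):
        super().__init__()
        self.proj = nn.Linear(in_features, out_features)                    # Gaussian projection (assumed form)
        self.means = nn.Parameter(torch.randn(out_features, num_components))
        self.log_scales = nn.Parameter(torch.zeros(out_features, num_components))
        self.weights = nn.Parameter(0.1 * torch.randn(out_features, num_components))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.proj(x).unsqueeze(-1)                                       # (..., out_features, 1)
        scales = torch.exp(self.log_scales)                                  # keep kernel widths positive
        kernels = torch.exp(-0.5 * ((z - self.means) / scales) ** 2)         # Gaussian kernel per component
        return (self.weights * kernels).sum(dim=-1)                          # unnormalized mixture response

# Usage sketch: drop the module into an MLP in place of a fixed activation.
mlp = nn.Sequential(GMNMSketch(784, 256), nn.Linear(256, 10))
out = mlp(torch.randn(32, 784))
print(out.shape)  # torch.Size([32, 10])
```

Because every parameter (projection, means, scales, weights) is trainable, the nonlinearity itself adapts during training, which is the property the paper credits for the reported gains over fixed activations.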

Country of Origin
🇺🇸 United States

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)