Breaking the Conventional Forward-Backward Tie in Neural Networks: Activation Functions
By: Luigi Troiano, Francesco Gissi, Vincenzo Benedetto, and more
Potential Business Impact:
Lets computers learn with simpler math.
Gradient-based neural network training traditionally enforces symmetry between forward and backward propagation, requiring activation functions to be differentiable (or sub-differentiable) and strictly monotonic in certain regions so that gradients do not vanish. This symmetry, which ties forward activations closely to backward gradients, significantly restricts the choice of activation functions, in particular excluding those with substantial flat or non-differentiable regions. In this paper, we challenge this assumption through mathematical analysis, demonstrating that the precise gradient magnitudes derived from activation functions are largely redundant, provided the gradient direction is preserved. Empirical experiments on foundational architectures, such as Multi-Layer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and Binary Neural Networks (BNNs), confirm that relaxing forward-backward symmetry and substituting traditional gradients with simpler or stochastic alternatives does not impair learning and may even improve training stability and efficiency. We explicitly demonstrate that neural networks with flat or non-differentiable activation functions, such as the Heaviside step function, can be trained effectively, broadening design flexibility and improving computational efficiency. Further empirical validation on more complex architectures remains a valuable direction for future research.
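To make the idea of breaking forward-backward symmetry concrete, the sketch below trains a small network through an exact (flat, non-differentiable) Heaviside activation while the backward pass substitutes a sign-preserving pass-through gradient. This is a minimal PyTorch sketch under our own assumptions; the layer sizes, the toy regression data, and the particular pass-through surrogate are illustrative choices, not the authors' implementation.

```python
import torch


class HeavisideSurrogate(torch.autograd.Function):
    """Exact Heaviside step in the forward pass; a sign-preserving
    pass-through gradient in the backward pass (the true derivative is
    zero almost everywhere, so it is replaced rather than used)."""

    @staticmethod
    def forward(ctx, x):
        # Flat, non-differentiable activation: 1 where x > 0, else 0.
        return (x > 0).to(x.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        # Keep the incoming gradient's direction; discard the (zero)
        # magnitude the exact derivative would impose.
        return grad_output


def heaviside(x):
    return HeavisideSurrogate.apply(x)


if __name__ == "__main__":
    # Toy regression run, only to show that the flat activation trains.
    torch.manual_seed(0)
    lin1, lin2 = torch.nn.Linear(8, 16), torch.nn.Linear(16, 1)
    opt = torch.optim.SGD(list(lin1.parameters()) + list(lin2.parameters()), lr=0.1)
    x, y = torch.randn(64, 8), torch.randn(64, 1)
    for step in range(200):
        h = heaviside(lin1(x))                        # flat activation forward
        loss = torch.nn.functional.mse_loss(lin2(h), y)
        opt.zero_grad()
        loss.backward()                               # surrogate gradient flows back
        opt.step()
    print(f"final loss: {loss.item():.4f}")
```

The pass-through backward above is only one possible substitute; the paper's broader claim is that simpler or even stochastic replacements for the exact derivative suffice, so long as the gradient's direction is preserved.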
Similar Papers
Advancing Constrained Monotonic Neural Networks: Achieving Universal Approximation Beyond Bounded Activations
Machine Learning (CS)
Makes AI learn and train more easily.
Neural networks for neurocomputing circuits: a computational study of tolerance to noise and activation function non-uniformity when machine learning materials properties
Neural and Evolutionary Computing
Makes computer brains work better despite flaws.
Developing Training Procedures for Piecewise-linear Spline Activation Functions in Neural Networks
Machine Learning (CS)
Makes computer brains learn better and faster.