Holonorm
By: Daryl Noupa Yongueng, Hamidou Tembine
Potential Business Impact:
Makes AI smarter and more stable.
Normalization is a key component of transformer training. In Dynamic Tanh (DyT), the authors demonstrated that tanh can be used as an alternative to layer normalization (LN) and confirmed the effectiveness of the idea. But tanh itself suffers from orthogonality, linearity, and distortion problems, so that proposal cannot be fully reliable. We therefore propose Holonorm (hn), which has residual connections and nonlinearity, making it suitable for replacing tanh in the context of normalization. Although the Holonorm expression reduces to the softsign function in dimension one, softsign is a componentwise function, which is ill suited to high-dimensional vectors and tensors. Holonorm preserves the orthogonality, direction, and invertibility of the signal. Holonorm also induces a suitable metric and maps all vectors into the open unit ball, which prevents exploding activations and improves stability in deep Transformer models. In this work, we meticulously examine normalization in transformers and show, first, that Holonorm, a generalized form of the softsign function, is well suited as a normalization function. Second, because hn takes values between 0 and 1, it can be read as a percentage, with $1 - \text{Holonorm}$ as its complement, making a model easier to interpret during evaluation.
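To make the claimed properties concrete, here is a minimal sketch assuming Holonorm generalizes the componentwise softsign $x/(1+|x|)$ to the vector-level map $x \mapsto x/(1 + \lVert x \rVert)$. The abstract does not state the exact formula, so this expression and the function names `holonorm` and `holonorm_inverse` are illustrative assumptions, not the authors' definition; the sketch only demonstrates the unit-ball, direction-preservation, and invertibility properties described above.

```python
import numpy as np

def holonorm(x: np.ndarray) -> np.ndarray:
    """Assumed vector-level generalization of softsign.

    Maps x to x / (1 + ||x||): the output lies in the open unit ball
    (||output|| = ||x|| / (1 + ||x||) < 1), the direction of x is
    preserved, and the map is invertible.
    """
    return x / (1.0 + np.linalg.norm(x))

def holonorm_inverse(y: np.ndarray) -> np.ndarray:
    """Inverse map y -> y / (1 - ||y||), defined for ||y|| < 1."""
    return y / (1.0 - np.linalg.norm(y))

# Usage: activations stay bounded and the original signal is recoverable.
x = np.array([3.0, 4.0])                    # ||x|| = 5
y = holonorm(x)                             # y = x / 6, so ||y|| = 5/6 < 1
assert np.linalg.norm(y) < 1.0              # maps into the open unit ball
assert np.allclose(holonorm_inverse(y), x)  # the map is invertible
```

Under this assumed form, the bounded output range is what prevents exploding activations, while invertibility means no information about the input is lost, unlike a saturating componentwise tanh.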
Similar Papers
Transformers without Normalization
Machine Learning (CS)
Makes computer brains work better without extra steps.
Stronger Normalization-Free Transformers
Machine Learning (CS)
New math trick helps computers learn better.
Optimal normalization in quantum-classical hybrid models for anti-cancer drug response prediction
Machine Learning (CS)
Helps find cancer drugs faster using quantum computers.