Wavy Transformer
By: Satoshi Noguchi, Yoshinobu Kawahara
Potential Business Impact:
Keeps deep AI models from blurring information together, so they understand text and images more accurately.
Transformers have achieved remarkable success across natural language processing (NLP) and computer vision (CV). However, deep transformer models often suffer from an over-smoothing issue, in which token representations converge to similar values as they pass through successive transformer blocks. In this paper, we establish an equivalence between the hidden-state dynamics induced by stacked attention layers and graph neural diffusion on a complete graph. From this perspective, over-smoothing can be interpreted as a consequence of the dissipative nature of the underlying diffusion dynamics. Motivated by this physical interpretation, we propose Wavy Transformer, which consists of a novel attention layer based on second-order wavy dynamics. We also introduce a feed-forward network and a normalization layer designed to preserve the physical state-velocity relationship under the chain rule, thereby extending the transformer architecture. We further validate our proposed techniques on various transformer models for NLP and CV tasks. The results consistently demonstrate that Wavy Transformer improves performance with minimal additional parameters and no extra hyperparameter tuning.
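As a rough illustration of the diffusion-versus-wave view described in the abstract, the sketch below contrasts a standard residual attention block (a first-order, explicit-Euler-style diffusion step) with a block that evolves a (state, velocity) pair, in the spirit of second-order wavy dynamics. This is a minimal sketch, not the paper's architecture: the step size dt, the pre-norm placement, the zero initial velocity, and the use of vanilla multi-head attention as the coupling term are all illustrative assumptions, and the paper's modified feed-forward and normalization layers are omitted.

```python
# Minimal sketch (not the authors' exact formulation) contrasting a
# first-order "diffusion-like" residual attention update with a
# second-order "wave-like" update that carries a velocity state.
import torch
import torch.nn as nn


class DiffusionAttentionBlock(nn.Module):
    """Standard residual attention: x <- x + Attn(x).
    This resembles an explicit-Euler step of first-order diffusion
    dynamics, which the paper interprets as dissipative (over-smoothing)."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        return x + out


class WavyAttentionBlock(nn.Module):
    """Second-order (wave-like) update: the block evolves a (state, velocity)
    pair instead of the state alone.
        v <- v + dt * Attn(x)
        x <- x + dt * v
    The discretization and dt are illustrative assumptions."""

    def __init__(self, dim, heads=4, dt=1.0):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.dt = dt

    def forward(self, x, v):
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        v = v + self.dt * out   # velocity updated by the attention coupling term
        x = x + self.dt * v     # state updated by the velocity
        return x, v


if __name__ == "__main__":
    B, T, D = 2, 16, 64
    x0 = torch.randn(B, T, D)

    # Stack of standard (diffusion-like) blocks.
    x = x0.clone()
    for blk in [DiffusionAttentionBlock(D) for _ in range(8)]:
        x = blk(x)
    print("token spread, diffusion blocks:", x.std(dim=1).mean().item())

    # Stack of wave-like blocks threading a velocity state through the depth.
    x, v = x0.clone(), torch.zeros_like(x0)  # zero initial velocity (an assumption)
    for blk in [WavyAttentionBlock(D) for _ in range(8)]:
        x, v = blk(x, v)
    print("token spread, wavy blocks:", x.std(dim=1).mean().item())
```

The structural point is that each wavy block returns both the updated state and its velocity, so this extra state must be threaded through the stack; per the abstract, the paper's feed-forward and normalization layers are adapted precisely to keep that state-velocity relationship consistent under the chain rule.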
Similar Papers
The Mean-Field Dynamics of Transformers
Machine Learning (CS)
Makes AI understand long texts better by grouping ideas.
Towards Understanding Transformers in Learning Random Walks
Machine Learning (CS)
Shows how computers learn to predict movement.
The Curved Spacetime of Transformer Architectures
Machine Learning (CS)
Makes AI understand words by bending their meanings.