Nexus: Higher-Order Attention Mechanisms in Transformers
By: Hanting Chen, Chu Zhong, Kai Han, and more
Potential Business Impact:
Helps AI models capture complex, multi-step relationships in data without adding extra parameters.
Transformers have achieved significant success across various domains, relying on self-attention to capture dependencies. However, the standard first-order attention mechanism is often limited by a low-rank bottleneck, struggling to capture intricate, multi-hop relationships within a single layer. In this paper, we propose the Higher-Order Attention Network (Hon), a novel architecture designed to enhance representational power through a recursive framework. Unlike standard approaches that use static linear projections for Queries and Keys, Hon dynamically refines these representations via nested self-attention mechanisms. Specifically, the Query and Key vectors are themselves outputs of inner attention loops, allowing tokens to aggregate global context and model high-order correlations prior to the final attention computation. We enforce a parameter-efficient weight-sharing strategy across recursive steps, ensuring that this enhanced expressivity incurs O(1) additional parameters. We provide theoretical analysis demonstrating that our method breaks the linear bottleneck of standard attention. Empirically, Hon outperforms standard Transformers on multiple benchmarks.
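To make the recursive Query/Key refinement concrete, here is a minimal PyTorch sketch of the nested-attention idea described in the abstract. The class name, the residual update, the choice to leave Values first-order, and the hyperparameters (num_recursions, head count) are illustrative assumptions, not the paper's exact layer.

```python
# Minimal sketch of nested (higher-order) attention: Queries and Keys are
# refined by a shared inner attention loop before the final attention step.
# Module names and hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn


class HigherOrderAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8, num_recursions: int = 2):
        super().__init__()
        self.num_recursions = num_recursions
        # One inner attention module is shared across all recursive steps
        # (and reused for both Queries and Keys in this sketch), so the
        # refinement adds only a constant number of extra parameters.
        self.inner_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Outer attention operating on the refined Queries/Keys.
        self.outer_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def refine(self, x: torch.Tensor) -> torch.Tensor:
        # Recursively let each token aggregate global context with the
        # shared inner attention before the final attention computation.
        for _ in range(self.num_recursions):
            update, _ = self.inner_attn(x, x, x)
            x = x + update  # residual connection keeps the recursion stable
        return x

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.refine(x)  # Queries are outputs of the inner attention loop
        k = self.refine(x)  # Keys are outputs of the inner attention loop
        out, _ = self.outer_attn(q, k, x)  # Values stay first-order (assumption)
        return out


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)  # (batch, tokens, dim)
    layer = HigherOrderAttention(dim=64, num_heads=8)
    print(layer(x).shape)  # torch.Size([2, 16, 64])
```

Because the same inner attention module is reused at every refinement step, increasing the recursion depth changes only compute, not parameter count, mirroring the O(1) additional-parameter claim in the abstract.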
Similar Papers
Neural Attention: A Novel Mechanism for Enhanced Expressive Power in Transformer Models
Machine Learning (CS)
Makes AI understand things better, like words and pictures.
Hierarchical Self-Attention: Generalizing Neural Attention Mechanics to Multi-Scale Problems
Machine Learning (CS)
Helps computers understand different kinds of information together.