RiemannFormer: A Framework for Attention in Curved Spaces
By: Zhongping Ji
Potential Business Impact:
Makes AI smarter by understanding words better.
This research endeavors to offer insights into unlocking the further potential of transformer-based architectures. A primary motivation is to give the attention mechanism in transformers a geometric interpretation. In our framework, attention mainly involves metric tensors, tangent spaces, and inner products, and how they relate to each other. These quantities and structures at discrete positions are intricately interconnected via the parallel transport of tangent vectors. To make the learning process more efficient, we reduce the number of parameters through carefully designed predefined configurations. Moreover, since transformers inherently lack a local inductive bias, we introduce an explicit mechanism that highlights a neighborhood by attenuating the values of remote positions. Experimental results demonstrate that our modules deliver significant performance improvements relative to the baseline. Further evaluation experiments on vision models and large language models will follow.
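As a rough illustration of the neighborhood-highlighting idea described above, the sketch below adds a distance-based attenuation to standard scaled dot-product attention scores, so that remote positions contribute less. The specific decay form (exp(-decay * |i - j|), applied in log-space) and all names here are assumptions for illustration only; the paper's actual mechanism may differ.

```python
import torch
import torch.nn.functional as F

def attention_with_locality(q, k, v, decay=0.1):
    """Scaled dot-product attention with a hypothetical distance-based
    attenuation: weights are multiplied by exp(-decay * |i - j|),
    implemented as a subtraction in log-space before the softmax."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5       # raw (n, n) attention scores
    n = scores.size(-1)
    idx = torch.arange(n)
    dist = (idx[None, :] - idx[:, None]).abs().float()  # positional distance |i - j|
    scores = scores - decay * dist                      # attenuate remote values
    return F.softmax(scores, dim=-1) @ v

# Toy usage: one sequence of 8 tokens with feature dimension 16.
q = k = v = torch.randn(8, 16)
out = attention_with_locality(q, k, v)
print(out.shape)  # torch.Size([8, 16])
```

Subtracting decay * dist before the softmax is equivalent to multiplying each attention weight by an exponential decay in the token distance, which is one simple way to bias attention toward a local neighborhood.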
Similar Papers
The Curved Spacetime of Transformer Architectures
Machine Learning (CS)
Makes AI understand words by bending their meanings.
Revisiting Transformers with Insights from Image Filtering
Computer Vision and Pattern Recognition
Makes AI understand pictures and words better.
Attention on the Sphere
Machine Learning (CS)
Helps computers understand 3D shapes better.