Selective Rotary Position Embedding
By: Sajad Movahedi, Timur Carstensen, Arshia Afzal, and more
Potential Business Impact:
Makes AI better at remembering and understanding long stories.
Position information is essential for language modeling. In softmax transformers, Rotary Position Embeddings (RoPE) encode positions through fixed-angle rotations, while in linear transformers, order is handled via input-dependent (selective) gating that decays past key-value associations. Selectivity has generally been shown to improve language-related tasks. Inspired by this, we introduce Selective RoPE, an input-dependent rotary embedding mechanism that generalizes RoPE and enables rotations by arbitrary angles for both linear and softmax transformers. We show that softmax attention already performs a hidden form of these rotations on query-key pairs, uncovering an implicit positional structure. We further show that in state-space models and gated linear transformers, the real part manages forgetting while the imaginary part encodes positions through rotations. We validate our method by equipping gated transformers with Selective RoPE, demonstrating that its input-dependent rotations improve performance in language modeling and on difficult sequence tasks such as copying, state tracking, and retrieval.
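To make the contrast concrete, below is a minimal sketch (not the paper's implementation) comparing standard fixed-angle RoPE with an input-dependent rotation: instead of angles fixed by absolute position, each token contributes an angle increment predicted from its own features, and increments are accumulated along the sequence. The projection w_theta, the softplus/cumsum construction, and all shapes are illustrative assumptions.

import torch

def rope_angles(positions, dim, base=10000.0):
    # Standard RoPE: fixed per-dimension frequencies times absolute position.
    inv_freq = base ** (-torch.arange(0, dim, 2).float() / dim)   # (dim/2,)
    return positions[:, None].float() * inv_freq[None, :]          # (seq, dim/2)

def selective_angles(x, w_theta):
    # Hypothetical selective variant: each token predicts a non-negative angle
    # increment; a cumulative sum turns increments into data-dependent "positions".
    per_token = torch.nn.functional.softplus(x @ w_theta)          # (seq, dim/2)
    return torch.cumsum(per_token, dim=0)                          # (seq, dim/2)

def apply_rotation(q, angles):
    # Rotate consecutive (even, odd) feature pairs of q by the given angles.
    cos, sin = angles.cos(), angles.sin()
    q_even, q_odd = q[..., 0::2], q[..., 1::2]
    out = torch.empty_like(q)
    out[..., 0::2] = q_even * cos - q_odd * sin
    out[..., 1::2] = q_even * sin + q_odd * cos
    return out

seq, dim = 6, 8
x = torch.randn(seq, dim)   # token representations
q = torch.randn(seq, dim)   # queries (keys would be rotated the same way)

# Fixed-angle RoPE rotation.
q_rope = apply_rotation(q, rope_angles(torch.arange(seq), dim))

# Input-dependent (selective) rotation; w_theta stands in for a learned projection.
w_theta = torch.randn(dim, dim // 2) * 0.1
q_sel = apply_rotation(q, selective_angles(x, w_theta))
print(q_rope.shape, q_sel.shape)

Because the accumulated angles are monotone in the sequence index, relative rotations between a query and an earlier key still depend only on the tokens between them, mirroring how standard RoPE encodes relative position but with data-dependent spacing.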
Similar Papers
Context-aware Rotary Position Embedding
Computation and Language
Makes AI understand word order better.
Benchmarking Rotary Position Embeddings for Automatic Speech Recognition
Computation and Language
Makes speech recognition faster and better.
The Rotary Position Embedding May Cause Dimension Inefficiency in Attention Heads for Long-Distance Retrieval
Computation and Language
Helps computers understand long stories better.