Length-Aware Rotary Position Embedding for Text-Speech Alignment
By: Hyeongju Kim, Juheon Lee, Jinhyeok Yang, and more
Potential Business Impact:
Makes computer voices sound more natural.
Many recent text-to-speech (TTS) systems are built on transformer architectures and employ cross-attention mechanisms for text-speech alignment. Within these systems, rotary position embedding (RoPE) is commonly used to encode positional information in text and speech representations. In this work, we introduce length-aware RoPE (LARoPE), a simple yet effective extension of RoPE that improves text-speech alignment. Unlike RoPE, which relies on absolute indices, LARoPE computes relative distances between query and key positions using length-normalized indices. Experimental results show that LARoPE consistently outperforms RoPE, offering faster loss convergence, more accurate text-speech alignment, and higher overall TTS quality. Furthermore, LARoPE demonstrates greater resilience to variations in utterance duration and maintains stable performance in extended speech generation up to 30 seconds, whereas RoPE suffers from notable degradation. Notably, our method achieves a state-of-the-art word error rate on a standard zero-shot TTS benchmark.
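The core change described in the abstract is small enough to sketch. Below is a minimal, hypothetical PyTorch illustration of the idea: standard RoPE rotates query/key feature pairs by angles proportional to absolute token indices, whereas LARoPE, as characterized above, first normalizes each index by its sequence length, so a position is expressed as a fraction of the utterance rather than a raw offset. The helper names and the `scale` constant are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def rotary_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    """Rotation angles for rotary embedding.

    positions: (seq_len,) float tensor of (possibly normalized) indices.
    Returns angles of shape (seq_len, dim // 2).
    """
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return torch.outer(positions, inv_freq)

def apply_rope(x: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
    """Rotate consecutive feature pairs of x by angles derived from `positions`.

    x: (seq_len, dim) with dim even.
    """
    angles = rotary_angles(positions, x.shape[-1])
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    rotated = torch.empty_like(x)
    rotated[..., 0::2] = x1 * cos - x2 * sin
    rotated[..., 1::2] = x1 * sin + x2 * cos
    return rotated

# Standard RoPE: absolute integer indices 0..L-1.
def rope_positions(seq_len: int) -> torch.Tensor:
    return torch.arange(seq_len, dtype=torch.float32)

# LARoPE (assumed form): indices normalized by sequence length, so a query
# at 40% of the speech sequence lands near a key at 40% of the text
# sequence regardless of how long either sequence is. `scale` is an assumed
# constant that restores a useful angular range after normalization.
def larope_positions(seq_len: int, scale: float = 100.0) -> torch.Tensor:
    return torch.arange(seq_len, dtype=torch.float32) / seq_len * scale
```

Applied to cross-attention, queries (speech frames) and keys (text tokens) would each be rotated with their own length-normalized indices, e.g.:

```python
dim = 64
text = torch.randn(25, dim)     # 25 text tokens
speech = torch.randn(400, dim)  # 400 speech frames

q = apply_rope(speech, larope_positions(400))
k = apply_rope(text, larope_positions(25))
attn = q @ k.T  # relative phase reflects normalized, not absolute, offsets
```

Under this reading, the rotation-induced relative distance between a query and a key depends on their fractional positions, which would explain the reported robustness to utterance-duration variation.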
Similar Papers
Selective Rotary Position Embedding
Computation and Language
Makes AI better at remembering and understanding long stories.
HoPE: Hyperbolic Rotary Positional Encoding for Stable Long-Range Dependency Modeling in Large Language Models
Computation and Language
Makes AI understand long sentences better.
LaMPE: Length-aware Multi-grained Position Encoding for Adaptive Long-context Scaling Without Training
Computation and Language
Lets AI understand much longer texts.