Long-Context Speech Synthesis with Context-Aware Memory
By: Zhipeng Li, Xiaofen Xing, Jingyuan Xing, and more
Potential Business Impact:
Makes computer voices sound like one consistent speaker across a whole long passage, rather than shifting in style from sentence to sentence.
In long-text speech synthesis, current approaches typically convert text to speech at the sentence level and concatenate the results to form pseudo-paragraph-level speech. These methods overlook the contextual coherence of paragraphs, leading to reduced naturalness and inconsistencies in style and timbre across long-form speech. To address these issues, we propose a Context-Aware Memory (CAM)-based long-context Text-to-Speech (TTS) model. The CAM block integrates and retrieves both long-term memory and local context details, enabling dynamic memory updates and transfers within long paragraphs to guide sentence-level speech synthesis. Furthermore, a prefix mask enhances in-context learning by enabling bidirectional attention on prefix tokens while maintaining unidirectional generation. Experimental results demonstrate that the proposed method outperforms baseline and state-of-the-art long-context methods in prosodic expressiveness, coherence, and context inference cost on paragraph-level speech.
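For intuition, here is a minimal PyTorch sketch of the two mechanisms the abstract describes. The prefix-mask construction follows the standard prefix-LM pattern (bidirectional attention among prefix keys, causal attention elsewhere), which matches the abstract's description. The gated memory update is a hypothetical stand-in for the CAM block, whose exact formulation is not given here: the names `cam_update` and `w_gate` are illustrative assumptions, not the paper's API.

```python
import torch

def prefix_lm_mask(seq_len: int, prefix_len: int) -> torch.Tensor:
    """Boolean attention mask (True = attention allowed).

    Prefix tokens (the context/memory segment) attend bidirectionally
    to one another, while generated tokens attend causally: to the
    full prefix and to earlier generated tokens only.
    """
    i = torch.arange(seq_len).unsqueeze(1)  # query positions, shape (S, 1)
    j = torch.arange(seq_len).unsqueeze(0)  # key positions,   shape (1, S)
    return (j <= i) | (j < prefix_len)      # causal OR within-prefix

def cam_update(memory: torch.Tensor, context: torch.Tensor,
               w_gate: torch.Tensor) -> torch.Tensor:
    """Hypothetical gated memory update (illustration only).

    Blends the running long-term memory with the local context of the
    current sentence; the paper's actual CAM block may differ.
    """
    gate = torch.sigmoid(torch.cat([memory, context], dim=-1) @ w_gate)
    return gate * memory + (1.0 - gate) * context

# Example: roll the memory forward over three sentences of a paragraph,
# then build a mask for 4 prefix (context) tokens and 4 generated tokens.
d = 8
memory = torch.zeros(d)
w_gate = torch.randn(2 * d, d) * 0.1
for sentence_context in torch.randn(3, d):
    memory = cam_update(memory, sentence_context, w_gate)

print(prefix_lm_mask(seq_len=8, prefix_len=4).int())
```

In this sketch, the memory carried across sentences would serve as the prefix segment when synthesizing the next sentence, which is how a bidirectionally attended prefix can guide otherwise unidirectional generation.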
Similar Papers
Evaluating Long-Term Memory for Long-Context Question Answering
Computation and Language
Helps computers remember conversations better.
Speech-Aware Long Context Pruning and Integration for Contextualized Automatic Speech Recognition
Computation and Language
Listens better to long talks, even with noise.
Whispering Context: Distilling Syntax and Semantics for Long Speech Transcripts
Computation and Language
Makes voice typing understand long talks better.