FlowDubber: Movie Dubbing with LLM-based Semantic-aware Learning and Flow Matching based Voice Enhancing
By: Gaoxiang Cong, Liang Li, Jiadong Pan and more
Potential Business Impact:
Makes movie voices match the actor's mouth.
Movie dubbing aims to convert scripts into speech that aligns with a given movie clip in both temporal and emotional aspects while preserving the vocal timbre of a brief reference audio. Existing methods focus primarily on reducing the word error rate while neglecting lip-sync and acoustic quality. To address these issues, we propose FlowDubber, a large language model (LLM) based flow matching architecture for dubbing. It achieves high-quality audio-visual synchronization and pronunciation by incorporating a large speech language model with dual contrastive aligning, and attains better acoustic quality than previous works via the proposed voice-enhanced flow matching. First, we adopt Qwen2.5 as the LLM backbone to learn the in-context sequence from movie scripts and reference audio. Then, the proposed semantic-aware learning captures LLM semantic knowledge at the phoneme level. Next, dual contrastive aligning (DCA) strengthens mutual alignment with lip movement, reducing ambiguities where similar phonemes might be confused. Finally, the proposed Flow-based Voice Enhancing (FVE) improves acoustic quality in two aspects: it introduces LLM-based acoustic flow matching guidance to strengthen clarity, and it uses an affine style prior to enhance speaker identity when recovering mel-spectrograms from noise via gradient vector field prediction. Extensive experiments demonstrate that our method outperforms several state-of-the-art methods on two primary benchmarks.
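To make the flow matching component concrete, below is a minimal, illustrative sketch of a conditional flow matching training objective for mel-spectrogram generation. It is not the paper's implementation: the VelocityNet architecture, dimensions, and the single "cond" tensor (which would stand in for the fused phoneme-level LLM semantics, lip-sync features, and speaker/style prior described above) are assumptions for demonstration only.

import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Toy velocity-field predictor: maps (noisy mel, time, condition) -> velocity.
    A placeholder for the acoustic decoder; the architecture here is illustrative."""
    def __init__(self, mel_dim=80, cond_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(mel_dim + cond_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, mel_dim),
        )

    def forward(self, x_t, t, cond):
        # x_t: (B, T, mel_dim), t: (B, 1), cond: (B, T, cond_dim)
        t_exp = t[:, None, :].expand(-1, x_t.size(1), -1)  # broadcast time over frames
        return self.net(torch.cat([x_t, cond, t_exp], dim=-1))

def flow_matching_loss(model, mel, cond, sigma_min=1e-4):
    """Conditional flow matching: regress the straight-line velocity from noise to data."""
    b = mel.size(0)
    t = torch.rand(b, 1, device=mel.device)        # random time in [0, 1]
    x0 = torch.randn_like(mel)                     # Gaussian prior sample
    # Optimal-transport interpolation between the noise sample and the target mel
    x_t = (1 - (1 - sigma_min) * t[:, :, None]) * x0 + t[:, :, None] * mel
    target_v = mel - (1 - sigma_min) * x0          # constant target velocity along the path
    pred_v = model(x_t, t, cond)
    return ((pred_v - target_v) ** 2).mean()

# Hypothetical usage with dummy tensors standing in for real dubbing features.
model = VelocityNet()
mel = torch.randn(4, 120, 80)     # (batch, frames, mel bins) target spectrogram
cond = torch.randn(4, 120, 256)   # fused conditioning sequence (assumed shape)
loss = flow_matching_loss(model, mel, cond)
loss.backward()

At inference, sampling would integrate the learned velocity field from noise to a mel-spectrogram (e.g., with a simple Euler ODE solver), with the guidance and style prior from the paper steering that trajectory.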
Similar Papers
Towards Authentic Movie Dubbing with Retrieve-Augmented Director-Actor Interaction Learning
Computation and Language
Makes movie voices sound like real actors.
VoiceCraft-Dub: Automated Video Dubbing with Neural Codec Language Models
Computer Vision and Pattern Recognition
Makes videos speak with matching faces.
Fine-grained Video Dubbing Duration Alignment with Segment Supervised Preference Optimization
Sound
Makes dubbed videos match the original speaking time.