Long-Video Audio Synthesis with Multi-Agent Collaboration

Published: March 13, 2025 | arXiv ID: 2503.10719v2

By: Yehang Zhang, Xinli Xu, Xiaojie Xu, and more

Potential Business Impact:

Generates synchronized speech and sound effects for long videos such as movies.

Business Areas:
Video Editing, Content and Publishing, Media and Entertainment, Video

Video-to-audio synthesis, which generates synchronized audio for visual content, critically enhances viewer immersion and narrative coherence in film and interactive media. However, video-to-audio dubbing for long-form content remains an unsolved challenge due to dynamic semantic shifts, temporal misalignment, and the absence of dedicated datasets. While existing methods excel on short videos, they falter in long scenarios (e.g., movies) due to fragmented synthesis and inadequate cross-scene consistency. We propose LVAS-Agent, a novel multi-agent framework that emulates professional dubbing workflows through collaborative role specialization. Our approach decomposes long-video synthesis into four steps: scene segmentation, script generation, sound design, and audio synthesis. Central innovations include a discussion-correction mechanism for scene/script refinement and a generation-retrieval loop for temporal-semantic alignment. To enable systematic evaluation, we introduce LVAS-Bench, the first benchmark with 207 professionally curated long videos spanning diverse scenarios. Experiments demonstrate superior audio-visual alignment over baseline methods. Project page: https://lvas-agent.github.io
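The four-step decomposition and the discussion-correction loop described in the abstract can be sketched as a simple agent pipeline. This is a minimal illustrative mock, not the authors' implementation: every class and function name here (`Scene`, `segment_scenes`, `critic`, `run_pipeline`, etc.) is an assumption, and the "agents" are placeholder functions standing in for the paper's vision/language/audio models.

```python
# Hypothetical sketch of a LVAS-Agent-style pipeline: scene segmentation ->
# script generation (with a discussion-correction loop) -> sound design ->
# audio synthesis. All names are illustrative, not the paper's actual API.
from dataclasses import dataclass


@dataclass
class Scene:
    start: float          # scene start, in seconds
    end: float            # scene end, in seconds
    description: str      # what the scene shows (would come from a VLM)
    script: str = ""      # dubbing script drafted for this scene
    sounds: str = ""      # sound-design cues for this scene


def segment_scenes(boundaries, descriptions):
    """Step 1: split the long video at detected shot boundaries."""
    return [Scene(s, e, d) for (s, e), d in zip(boundaries, descriptions)]


def generate_script(scene):
    """Step 2: draft a dubbing script from the scene description."""
    return f"Narration for '{scene.description}'"


def critic_ok(script, scene):
    """Discussion-correction: a critic agent accepts drafts that cover the scene topic."""
    return scene.description in script


def refine(script, scene):
    """Revise a rejected draft so it mentions the scene's content."""
    return script + f" (covers {scene.description})"


def design_sound(scene):
    """Step 3: propose ambient/effect cues matching the scene."""
    return f"ambience:{scene.description}"


def synthesize(scene):
    """Step 4: placeholder for the actual audio generator / retrieval loop."""
    return {"span": (scene.start, scene.end),
            "script": scene.script,
            "sounds": scene.sounds}


def run_pipeline(boundaries, descriptions, max_rounds=2):
    scenes = segment_scenes(boundaries, descriptions)
    for sc in scenes:
        sc.script = generate_script(sc)
        for _ in range(max_rounds):       # discussion-correction rounds
            if critic_ok(sc.script, sc):
                break
            sc.script = refine(sc.script, sc)
        sc.sounds = design_sound(sc)
    return [synthesize(sc) for sc in scenes]
```

In the real system each stand-in function would be a specialized agent (segmenter, scriptwriter, critic, sound designer, synthesizer), and the final step would also run the paper's generation-retrieval loop for temporal-semantic alignment rather than returning a stub.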

Page Count
16 pages

Category
Computer Science: Computer Vision and Pattern Recognition