Score: 1

JoVA: Unified Multimodal Learning for Joint Video-Audio Generation

Published: December 15, 2025 | arXiv ID: 2512.13677v1

By: Xiaohu Huang, Hao Zhou, Qiangpeng Yang, and more

Potential Business Impact:

Generates videos with realistic motion and speech synchronized to lip movements.

Business Areas:
Motion Capture, Media and Entertainment, Video

In this paper, we present JoVA, a unified framework for joint video-audio generation. Despite encouraging recent advances, existing methods face two critical limitations. First, most existing approaches can only generate ambient sounds and lack the capability to produce human speech synchronized with lip movements. Second, recent attempts at unified human video-audio generation typically rely on explicit fusion or modality-specific alignment modules, which add architectural complexity and erode the simplicity of the underlying transformer. To address these issues, JoVA applies joint self-attention across video and audio tokens within each transformer layer, enabling direct and efficient cross-modal interaction without additional alignment modules. Furthermore, to achieve high-quality lip-speech synchronization, we introduce a simple yet effective mouth-area loss based on facial keypoint detection, which strengthens supervision on the critical mouth region during training without compromising architectural simplicity. Extensive experiments on standard benchmarks demonstrate that JoVA outperforms or is competitive with both unified and audio-driven state-of-the-art methods in lip-sync accuracy, speech quality, and overall video-audio generation fidelity. These results establish JoVA as an elegant framework for high-quality multimodal generation.
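The abstract describes its two mechanisms without implementation detail. The minimal PyTorch sketch below illustrates the first idea: self-attention computed jointly over concatenated video and audio token sequences inside a single transformer block, so cross-modal interaction happens without a separate fusion module. All class, argument, and shape conventions here are hypothetical illustrations, not the authors' code.

```python
import torch
import torch.nn as nn

class JointSelfAttentionBlock(nn.Module):
    """Hypothetical transformer block: one self-attention pass over the
    concatenation of video and audio tokens, then a shared MLP."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, video_tokens, audio_tokens):
        # Concatenate along the sequence axis so every video token can
        # attend to every audio token (and vice versa) in one pass.
        n_video = video_tokens.shape[1]
        x = torch.cat([video_tokens, audio_tokens], dim=1)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))
        # Split back into per-modality streams for subsequent layers.
        return x[:, :n_video], x[:, n_video:]
```

The mouth-area loss can likewise be sketched as a masked reconstruction penalty. In this sketch the binary mouth masks are assumed to come from an off-the-shelf facial keypoint detector applied as a preprocessing step (not shown); the paper's exact formulation may differ.

```python
def mouth_area_loss(pred_frames, target_frames, mouth_masks):
    """Extra reconstruction penalty restricted to the mouth region.

    mouth_masks: (B, T, 1, H, W) binary masks, assumed derived offline
    from facial keypoints around the mouth.
    """
    diff = (pred_frames - target_frames) ** 2
    masked = diff * mouth_masks
    # Normalize by mask area so the loss scale is independent of mouth size.
    return masked.sum() / mouth_masks.sum().clamp(min=1.0)
```

Added to the standard generation objective with a small weight, such a term concentrates gradient signal on the region that matters most for lip-sync while leaving the architecture untouched.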

Country of Origin
🇭🇰 Hong Kong

Page Count
17 pages

Category
Computer Science:
Computer Vision and Pattern Recognition