JoVA: Unified Multimodal Learning for Joint Video-Audio Generation
By: Xiaohu Huang, Hao Zhou, Qiangpeng Yang, and more
Potential Business Impact:
Makes videos talk and move realistically.
In this paper, we present JoVA, a unified framework for joint video-audio generation. Despite recent encouraging advances, existing methods face two critical limitations. First, most existing approaches can only generate ambient sounds and lack the capability to produce human speech synchronized with lip movements. Second, recent attempts at unified human video-audio generation typically rely on explicit fusion or modality-specific alignment modules, which add architectural complexity and erode the simplicity of the original transformer design. To address these issues, JoVA employs joint self-attention across video and audio tokens within each transformer layer, enabling direct and efficient cross-modal interaction without additional alignment modules. Furthermore, to achieve high-quality lip-speech synchronization, we introduce a simple yet effective mouth-area loss based on facial keypoint detection, which strengthens supervision on the critical mouth region during training without compromising architectural simplicity. Extensive benchmark experiments demonstrate that JoVA outperforms or is competitive with state-of-the-art unified and audio-driven methods in lip-sync accuracy, speech quality, and overall video-audio generation fidelity. Our results establish JoVA as an elegant framework for high-quality multimodal generation.
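The two mechanisms described in the abstract lend themselves to a compact sketch. The snippet below is a minimal, hypothetical illustration under stated assumptions, not the authors' released code: joint self-attention is realized by concatenating video and audio tokens before a standard multi-head attention layer, and the mouth-area loss is approximated as an extra reconstruction term restricted to a mouth mask derived from detected facial keypoints. All names (`JointAVBlock`, `mouth_area_loss`, `mouth_mask`, `lambda_mouth`) are assumptions for illustration.

```python
# Minimal sketch (assumption, not the authors' implementation):
# joint self-attention over concatenated video/audio tokens, plus a
# keypoint-derived mouth-area reconstruction loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointAVBlock(nn.Module):
    """One transformer layer attending over video and audio tokens jointly."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, video_tokens, audio_tokens):
        # Concatenate along the sequence axis so every token attends to
        # every other token across both modalities, with no extra
        # fusion or alignment module.
        x = torch.cat([video_tokens, audio_tokens], dim=1)
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        n_v = video_tokens.shape[1]
        return x[:, :n_v], x[:, n_v:]  # split back into modalities


def mouth_area_loss(pred_frames, target_frames, mouth_mask, lambda_mouth=1.0):
    """Reconstruction loss with extra weight on the mouth region.

    mouth_mask: binary mask (same spatial shape as the frames) built from
    facial keypoints around the mouth, assumed to be precomputed.
    """
    base = F.mse_loss(pred_frames, target_frames)
    mouth = F.mse_loss(pred_frames * mouth_mask, target_frames * mouth_mask)
    return base + lambda_mouth * mouth
```

In this reading, the joint block keeps the plain transformer structure intact, and lip-speech supervision enters only through the additional masked loss term, which matches the paper's emphasis on architectural simplicity.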
Similar Papers
UniAVGen: Unified Audio and Video Generation with Asymmetric Cross-Modal Interactions
CV and Pattern Recognition
Makes videos match sounds perfectly.
JointAVBench: A Benchmark for Joint Audio-Visual Reasoning Evaluation
Multimedia
Tests AI that understands videos and sounds together.
UnityVideo: Unified Multi-Modal Multi-Task Learning for Enhancing World-Aware Video Generation
CV and Pattern Recognition
Makes videos understand the real world better.