In-Context Audio Control of Video Diffusion Transformers
By: Wenze Liu, Weicai Ye, Minghong Cai, and more
Recent advancements in video generation have seen a shift towards unified, transformer-based foundation models that can handle multiple conditional inputs in-context. However, these models have primarily focused on modalities like text, images, and depth maps, while strictly time-synchronous signals like audio have been underexplored. This paper introduces In-Context Audio Control of video diffusion transformers (ICAC), a framework that investigates the integration of audio signals for speech-driven video generation within a unified full-attention architecture, akin to FullDiT. We systematically explore three distinct mechanisms for injecting audio conditions: standard cross-attention, 2D self-attention, and unified 3D self-attention. Our findings reveal that while 3D attention offers the highest potential for capturing spatio-temporal audio-visual correlations, it presents significant training challenges. To overcome this, we propose a Masked 3D Attention mechanism that constrains the attention pattern to enforce temporal alignment, enabling stable training and superior performance. Our experiments demonstrate that this approach achieves strong lip synchronization and video quality, conditioned on an audio stream and reference images.
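To make the idea of constraining the attention pattern more concrete, below is a minimal sketch of a "masked 3D attention" mask in PyTorch. The token layout (video patches and audio tokens grouped per frame), the same-frame alignment rule, and all function names are illustrative assumptions, not the paper's exact formulation: video tokens attend freely to all video tokens, while video-audio attention is restricted to temporally aligned frames.

```python
# Sketch of a masked joint (3D) attention: video and audio tokens share one
# self-attention, but cross-modal attention is limited to the same frame.
# Token layout and masking rules here are assumptions for illustration only.
import torch
import torch.nn.functional as F


def build_masked_3d_attention_mask(num_frames: int,
                                   patches_per_frame: int,
                                   audio_tokens_per_frame: int) -> torch.Tensor:
    """Boolean mask (True = may attend) over [video tokens | audio tokens]."""
    v_len = num_frames * patches_per_frame
    a_len = num_frames * audio_tokens_per_frame
    total = v_len + a_len
    mask = torch.zeros(total, total, dtype=torch.bool)

    # Video tokens attend to all video tokens (full spatio-temporal attention).
    mask[:v_len, :v_len] = True

    # Frame index of every video / audio token in the joint sequence.
    v_frame = torch.arange(num_frames).repeat_interleave(patches_per_frame)
    a_frame = torch.arange(num_frames).repeat_interleave(audio_tokens_per_frame)

    # Video <-> audio attention only within the same frame (temporal alignment).
    same_frame = v_frame[:, None] == a_frame[None, :]   # (v_len, a_len)
    mask[:v_len, v_len:] = same_frame
    mask[v_len:, :v_len] = same_frame.T

    # Audio tokens attend to audio tokens of the same frame (an assumption).
    mask[v_len:, v_len:] = a_frame[:, None] == a_frame[None, :]
    return mask


def masked_3d_attention(q, k, v, mask):
    # q, k, v: (batch, heads, seq, dim); the 2D mask broadcasts over batch/heads.
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)


if __name__ == "__main__":
    B, H, D = 1, 4, 32
    frames, patches, audio_per_frame = 4, 16, 2
    mask = build_masked_3d_attention_mask(frames, patches, audio_per_frame)
    seq = mask.shape[0]                       # 4*16 + 4*2 = 72 tokens
    q = k = v = torch.randn(B, H, seq, D)
    out = masked_3d_attention(q, k, v, mask)
    print(out.shape)                          # torch.Size([1, 4, 72, 32])
```

The point of such a mask is that the unified self-attention keeps full capacity over the video tokens while the audio condition can only influence its own frame, which is one way to enforce the temporal alignment the abstract attributes to the proposed mechanism.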
Similar Papers
AudCast: Audio-Driven Human Video Generation by Cascaded Diffusion Transformers
CV and Pattern Recognition
Makes people talk and move realistically in videos.
Controllable Audio-Visual Viewpoint Generation from 360° Spatial Information
Multimedia
Makes videos sound and look right from any angle.
3MDiT: Unified Tri-Modal Diffusion Transformer for Text-Driven Synchronized Audio-Video Generation
Multimedia
Makes videos and sounds match perfectly.