ACD: Direct Conditional Control for Video Diffusion Models via Attention Supervision
By: Weiqi Li, Zehao Zhang, Liang Lin, and more
Potential Business Impact:
Makes videos match what you want them to show.
Controllability is a fundamental requirement in video synthesis, where accurate alignment with conditioning signals is essential. Existing classifier-free guidance methods typically achieve conditioning indirectly by modeling the joint distribution of data and conditions, which often yields limited control over the specified conditions. Classifier-based guidance enforces conditions through an external classifier, but the model may exploit this mechanism, raising the classifier score without genuinely satisfying the intended condition and producing adversarial artifacts. In this paper, we propose Attention-Conditional Diffusion (ACD), a novel framework for direct conditional control in video diffusion models via attention supervision. By aligning the model's attention maps with external control signals, ACD enforces conditions directly rather than through indirect guidance, achieving stronger controllability. To support this, we introduce a sparse 3D-aware object layout as an efficient conditioning signal, along with a dedicated Layout ControlNet and an automated annotation pipeline for scalable layout integration. Extensive experiments on benchmark video generation datasets demonstrate that ACD delivers superior alignment with conditioning inputs while preserving temporal coherence and visual fidelity, establishing an effective paradigm for conditional video synthesis.
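The abstract describes aligning the model's attention maps with external control signals during training. As a rough illustration of what such an attention-supervision term could look like, here is a minimal PyTorch sketch; the function name, tensor shapes, and the choice of an MSE penalty are assumptions for illustration, not the paper's actual formulation.

import torch
import torch.nn.functional as F

def attention_supervision_loss(attn_maps: torch.Tensor,
                               layout_masks: torch.Tensor) -> torch.Tensor:
    """Align cross-attention maps with external layout masks (hypothetical sketch).

    attn_maps:    (batch, heads, tokens, objects) attention weights for the
                  conditioned object tokens; averaged over heads below.
    layout_masks: (batch, tokens, objects) binary masks marking which
                  spatio-temporal tokens each object's layout occupies.
    """
    # Average over heads to get one attention map per object.
    attn = attn_maps.mean(dim=1)  # (B, T, O)
    # Normalize masks over tokens so each object mask is a target distribution.
    target = layout_masks / layout_masks.sum(dim=1, keepdim=True).clamp(min=1e-6)
    # Normalize attention the same way before comparing.
    attn = attn / attn.sum(dim=1, keepdim=True).clamp(min=1e-6)
    # Penalize attention mass that falls outside the layout region.
    return F.mse_loss(attn, target)

# Usage (assumed training loop): add the supervision term to the usual
# denoising objective, weighted by a hypothetical coefficient lambda_attn:
#   loss = denoising_loss + lambda_attn * attention_supervision_loss(attn, masks)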
Similar Papers
In-Context Audio Control of Video Diffusion Transformers
CV and Pattern Recognition
Makes videos match spoken words perfectly.
CtrlVDiff: Controllable Video Generation via Unified Multimodal Video Diffusion
CV and Pattern Recognition
Makes videos change appearance and content easily.
DivControl: Knowledge Diversion for Controllable Image Generation
CV and Pattern Recognition
Makes AI draw pictures from many different ideas.