Enabling Versatile Controls for Video Diffusion Models
By: Xu Zhang, Hao Zhou, Haoming Qin, and more
Potential Business Impact:
Makes videos follow your exact drawing instructions.
Despite substantial progress in text-to-video generation, precise and flexible control over fine-grained spatiotemporal attributes remains a significant open challenge in video generation research. To address these limitations, we introduce VCtrl (also termed PP-VCtrl), a novel framework that enables fine-grained control over pre-trained video diffusion models in a unified manner. VCtrl integrates diverse user-specified control signals (such as Canny edges, segmentation masks, and human keypoints) into pre-trained video diffusion models via a generalizable conditional module that uniformly encodes multiple types of auxiliary signals without modifying the underlying generator. Additionally, we design a unified control-signal encoding pipeline and a sparse residual connection mechanism to efficiently incorporate control representations. Comprehensive experiments and human evaluations demonstrate that VCtrl effectively enhances both controllability and generation quality. The source code and pre-trained models are implemented in the PaddlePaddle framework and publicly available at http://github.com/PaddlePaddle/PaddleMIX/tree/develop/ppdiffusers/examples/ppvctrl.
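The abstract describes the architecture only at a high level, so the following is a minimal PyTorch sketch of the two ideas it names: a single encoder that maps heterogeneous control signals (edges, masks, keypoints rendered as images) into a shared latent space, and zero-initialized sparse residual connections that inject those features into a subset of a frozen generator's blocks. All class names, dimensions, and injection points here are illustrative assumptions, not the authors' released implementation (which uses PaddlePaddle).

```python
# Hypothetical sketch of a VCtrl-style conditional module.
# Assumed names/shapes: ConditionEncoder, SparseResidualControl,
# dim=320, inject_at=(0, 2, 4) -- none are from the paper.
import torch
import torch.nn as nn

class ConditionEncoder(nn.Module):
    """Maps any per-frame control map (Canny edges, segmentation masks,
    keypoints rendered as images) into one shared feature space, so a
    single module handles all signal types uniformly."""
    def __init__(self, in_channels=3, dim=320):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, dim // 4, 3, padding=1), nn.SiLU(),
            nn.Conv2d(dim // 4, dim // 2, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(dim // 2, dim, 3, stride=2, padding=1),
        )

    def forward(self, control):   # (B*T, C, H, W) stacked video frames
        return self.net(control)  # (B*T, dim, H/4, W/4)

class SparseResidualControl(nn.Module):
    """Adds control features to only a subset of backbone blocks via
    zero-initialized 1x1 residual projections: sparse, so blocks without
    a connection (and the frozen generator's weights) are untouched."""
    def __init__(self, dim=320, inject_at=(0, 2, 4)):
        super().__init__()
        self.inject_at = set(inject_at)
        self.proj = nn.ModuleDict({str(i): nn.Conv2d(dim, dim, 1)
                                   for i in inject_at})
        # Zero init: training starts from the unmodified pretrained model.
        for m in self.proj.values():
            nn.init.zeros_(m.weight)
            nn.init.zeros_(m.bias)

    def forward(self, block_idx, hidden, control_feat):
        if block_idx in self.inject_at:
            hidden = hidden + self.proj[str(block_idx)](control_feat)
        return hidden

# Usage sketch: encode edge-map frames, inject at backbone block 2.
enc, ctrl = ConditionEncoder(), SparseResidualControl()
feat = enc(torch.randn(2, 3, 64, 64))            # e.g. Canny-edge frames
h = ctrl(2, torch.randn(2, 320, 16, 16), feat)   # residual injection
```

Zero-initializing the residual projections is a common trick (popularized by ControlNet-style adapters) that keeps the pretrained generator's behavior intact at the start of fine-tuning; whether VCtrl uses exactly this initialization is an assumption here.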
Similar Papers
CtrlVDiff: Controllable Video Generation via Unified Multimodal Video Diffusion
CV and Pattern Recognition
Makes videos change appearance and content easily.
EVCtrl: Efficient Control Adapter for Visual Generation
CV and Pattern Recognition
Makes AI create pictures and videos faster.
PhysCtrl: Generative Physics for Controllable and Physics-Grounded Video Generation
CV and Pattern Recognition
Makes videos move realistically, like real objects.