CamC2V: Context-aware Controllable Video Generation

Published: April 8, 2025 | arXiv ID: 2504.06022v2

By: Luis Denninger, Sina Mokhtarzadeh Azar, Juergen Gall

Potential Business Impact:

Generates videos from still images with controllable camera movement.

Business Areas:
Computer Vision Hardware, Software

Recently, image-to-video (I2V) diffusion models have demonstrated impressive scene understanding and generative quality, incorporating image conditions to guide generation. However, these models primarily animate static images without extending beyond their provided context. Introducing additional constraints, such as camera trajectories, can enhance diversity but often degrade visual quality, limiting their applicability for tasks requiring faithful scene representation. We propose CamC2V, a context-to-video (C2V) model that integrates multiple image conditions as context with 3D constraints alongside camera control to enrich both global semantics and fine-grained visual details. This enables more coherent and context-aware video generation. Moreover, we motivate the necessity of temporal awareness for an effective context representation. Our comprehensive study on the RealEstate10K dataset demonstrates improvements in visual quality and camera controllability. We will publish our code upon acceptance.
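The abstract describes fusing multiple context images (with temporal awareness) and camera trajectories into a single conditioning signal. The paper's actual architecture is not specified here, so the following is only a minimal illustrative sketch: context-image embeddings are tagged with sinusoidal temporal encodings, per-frame camera extrinsics are flattened into pose tokens, and both are concatenated into one token sequence. All function names and dimensions are hypothetical.

```python
import numpy as np

def build_condition_tokens(context_embs, context_times, cam_poses, num_frames, dim=64):
    """Hypothetical sketch (not the paper's architecture): fuse several
    context-image embeddings, each tagged with a temporal position
    (reflecting the paper's emphasis on temporal awareness), with
    per-frame camera-pose tokens for camera control."""
    def time_enc(t, d):
        # Standard sinusoidal positional encoding for time step t
        freqs = np.exp(-np.log(10000.0) * np.arange(0, d, 2) / d)
        ang = t * freqs
        enc = np.zeros(d)
        enc[0::2] = np.sin(ang)
        enc[1::2] = np.cos(ang)
        return enc

    # One token per context image: embedding + temporal encoding
    ctx_tokens = np.stack(
        [e + time_enc(t, dim) for e, t in zip(context_embs, context_times)]
    )

    # One token per generated frame: flattened 4x4 camera extrinsic,
    # zero-padded to the model dimension
    cam_tokens = np.zeros((num_frames, dim))
    for i, pose in enumerate(cam_poses):
        flat = np.asarray(pose).reshape(-1)  # 16 values
        cam_tokens[i, :flat.size] = flat

    # Concatenate along the token axis: (num_context + num_frames, dim)
    return np.concatenate([ctx_tokens, cam_tokens], axis=0)
```

In a real diffusion model these tokens would feed cross-attention layers of the denoiser; camera poses are often encoded as Plücker-ray maps rather than raw extrinsics, but a flattened matrix keeps the sketch self-contained.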

Country of Origin
🇩🇪 Germany

Page Count
13 pages

Category
Computer Science:
CV and Pattern Recognition