CETCAM: Camera-Controllable Video Generation via Consistent and Extensible Tokenization
By: Zelin Zhao, Xinyu Gong, Bangya Liu, and more
Achieving precise camera control in video generation remains challenging, as existing methods often rely on camera pose annotations that are difficult to scale to large and dynamic datasets and are frequently inconsistent with depth estimation, leading to train-test discrepancies. We introduce CETCAM, a camera-controllable video generation framework that eliminates the need for camera annotations through a consistent and extensible tokenization scheme. CETCAM leverages recent advances in geometry foundation models, such as VGGT, to estimate depth and camera parameters and converts them into unified, geometry-aware tokens. These tokens are seamlessly integrated into a pretrained video diffusion backbone via lightweight context blocks. Trained in two progressive stages, CETCAM first learns robust camera controllability from diverse raw video data and then refines fine-grained visual quality using curated high-fidelity datasets. Extensive experiments across multiple benchmarks demonstrate state-of-the-art geometric consistency, temporal stability, and visual realism. Moreover, CETCAM exhibits strong adaptability to additional control modalities, including inpainting and layout control, highlighting its flexibility beyond camera control. The project page is available at https://sjtuytc.github.io/CETCam_project_page.github.io/.
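The abstract describes converting estimated depth maps and camera parameters into unified geometry-aware tokens and injecting them into a pretrained video diffusion backbone through lightweight context blocks. The sketch below is a minimal, hypothetical illustration of that kind of pipeline; the module names (GeometryTokenizer, ContextBlock), tensor shapes, and the point-map tokenization are assumptions made for clarity, not CETCAM's actual implementation.

```python
# Hypothetical sketch: depth + camera parameters -> geometry-aware tokens ->
# lightweight cross-attention injection into a video diffusion backbone.
# All module names and shapes are illustrative assumptions, not CETCAM code.
import torch
import torch.nn as nn


def unproject_depth(depth, K, E):
    """Lift depth maps (B, T, H, W) to world-space point maps (B, T, H, W, 3)
    using intrinsics K (B, T, 3, 3) and camera-to-world extrinsics E (B, T, 4, 4)."""
    B, T, H, W = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=depth.dtype, device=depth.device),
        torch.arange(W, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1)         # (H, W, 3) homogeneous pixels
    rays = torch.einsum("btij,hwj->bthwi", torch.inverse(K), pix)    # per-pixel camera rays
    pts_cam = rays * depth[..., None]                                 # scale rays by depth
    pts_hom = torch.cat([pts_cam, torch.ones_like(depth)[..., None]], dim=-1)
    pts_world = torch.einsum("btij,bthwj->bthwi", E, pts_hom)[..., :3]
    return pts_world


class GeometryTokenizer(nn.Module):
    """Patchify world-space point maps into geometry-aware tokens (hypothetical)."""
    def __init__(self, patch=16, dim=1024):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, pts_world):                       # (B, T, H, W, 3)
        B, T, H, W, _ = pts_world.shape
        x = pts_world.reshape(B * T, H, W, 3).permute(0, 3, 1, 2)
        tok = self.proj(x).flatten(2).transpose(1, 2)   # (B*T, N, dim)
        return tok.reshape(B, T * tok.shape[1], -1)     # (B, T*N, dim)


class ContextBlock(nn.Module):
    """Lightweight cross-attention that lets backbone latents attend to
    geometry tokens via a residual update (hypothetical)."""
    def __init__(self, dim=1024, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, latents, geo_tokens):             # (B, L, dim), (B, T*N, dim)
        out, _ = self.attn(self.norm(latents), geo_tokens, geo_tokens)
        return latents + out                             # residual injection into the backbone
```

In a design like this, the depth and camera inputs would come from a geometry foundation model such as VGGT rather than from annotations, and only the tokenizer and context blocks would carry the new control signal; whether CETCAM freezes the diffusion backbone during either training stage is not stated in the abstract.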
Similar Papers
Generative Photographic Control for Scene-Consistent Video Cinematic Editing
CV and Pattern Recognition
Enables cinematic look editing of videos while keeping the scene consistent.
CamC2V: Context-aware Controllable Video Generation
CV and Pattern Recognition
Generates videos from images with controllable camera movement.
PostCam: Camera-Controllable Novel-View Video Generation with Query-Shared Cross-Attention
CV and Pattern Recognition
Re-renders camera viewpoints of videos after they are captured.