Dream4D: Lifting Camera-Controlled I2V towards Spatiotemporally Consistent 4D Generation
By: Xiaoyan Liu, Kangrui Li, Jiaxin Liu
Potential Business Impact:
Creates realistic 3D videos from one picture.
The synthesis of spatiotemporally coherent 4D content presents fundamental challenges in computer vision, requiring simultaneous modeling of high-fidelity spatial representations and physically plausible temporal dynamics. Current approaches often struggle to maintain view consistency while handling complex scene dynamics, particularly in large-scale environments with multiple interacting elements. This work introduces Dream4D, a novel framework that bridges this gap through a synergy of controllable video generation and neural 4D reconstruction. Our approach adopts a two-stage architecture: it first predicts optimal camera trajectories from a single image using few-shot learning, then generates geometrically consistent multi-view sequences via a specialized pose-conditioned diffusion process; these sequences are finally converted into a persistent 4D representation. This framework is the first to leverage both the rich temporal priors of video diffusion models and the geometric awareness of reconstruction models, which significantly facilitates 4D generation and achieves higher quality (e.g., mPSNR, mSSIM) than existing methods.
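The abstract describes a two-stage pipeline: a trajectory predictor that regresses a camera path from a single image, and a pose-conditioned video diffusion model whose output is lifted into a persistent 4D representation. The sketch below shows how such a pipeline could be wired together in PyTorch. Every name in it (TrajectoryPredictor, PoseConditionedDenoiser, dream4d_pipeline) is a hypothetical placeholder rather than the authors' released code, and the final 4D reconstruction stage is elided.

```python
# A minimal sketch of the two-stage pipeline described in the abstract.
# All class and function names are hypothetical placeholders, not the
# authors' actual API.

import torch
import torch.nn as nn


class TrajectoryPredictor(nn.Module):
    """Stage 1 (sketch): regress a per-frame 6-DoF camera pose from one image."""

    def __init__(self, num_frames: int = 16, feat_dim: int = 128):
        super().__init__()
        self.num_frames = num_frames
        self.encoder = nn.Sequential(           # stand-in image encoder
            nn.Conv2d(3, 32, kernel_size=4, stride=4), nn.ReLU(),
            nn.Conv2d(32, feat_dim, kernel_size=4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One pose per frame: 3 translation + 3 rotation parameters.
        self.head = nn.Linear(feat_dim, num_frames * 6)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        poses = self.head(self.encoder(image))            # (B, T*6)
        return poses.view(-1, self.num_frames, 6)         # (B, T, 6)


class PoseConditionedDenoiser(nn.Module):
    """Stage 2 (sketch): one denoising step of a pose-conditioned video
    diffusion model. A real backbone would be a spatiotemporal U-Net or DiT."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.pose_embed = nn.Linear(6, feat_dim)
        self.net = nn.Sequential(
            nn.Conv2d(3 + feat_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, noisy: torch.Tensor, poses: torch.Tensor) -> torch.Tensor:
        B, T, C, H, W = noisy.shape                       # (B, T, 3, H, W)
        cond = self.pose_embed(poses)                     # (B, T, D)
        D = cond.shape[-1]
        # Broadcast each frame's pose embedding over its spatial grid.
        cond = cond.view(B, T, D, 1, 1).expand(B, T, D, H, W)
        x = torch.cat([noisy, cond], dim=2).reshape(B * T, C + D, H, W)
        return self.net(x).view(B, T, C, H, W)            # predicted noise


def dream4d_pipeline(image, predictor, denoiser, steps=4, size=64):
    """Image -> camera trajectory -> pose-conditioned multi-view frames.
    The final stage (lifting frames into a persistent 4D representation
    with a geometry-aware reconstructor) is elided here."""
    poses = predictor(image)                              # (B, T, 6)
    B, T = poses.shape[:2]
    frames = torch.randn(B, T, 3, size, size)             # start from noise
    for _ in range(steps):                                # toy denoising loop
        frames = frames - 0.1 * denoiser(frames, poses)
    return poses, frames


if __name__ == "__main__":
    img = torch.randn(1, 3, 64, 64)
    poses, frames = dream4d_pipeline(img, TrajectoryPredictor(),
                                     PoseConditionedDenoiser())
    print(poses.shape, frames.shape)  # (1, 16, 6) and (1, 16, 3, 64, 64)
```

The structural point the sketch makes is that the Stage 1 poses serve as the conditioning signal at every Stage 2 denoising step, which is what couples the generated views to a single, consistent camera trajectory.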
Similar Papers
BulletTime: Decoupled Control of Time and Camera Pose for Video Generation
CV and Pattern Recognition
Lets you change what happens and where the camera looks.
SEE4D: Pose-Free 4D Generation via Auto-Regressive Video Inpainting
CV and Pattern Recognition
Creates 3D videos from regular videos.
Joint 3D Geometry Reconstruction and Motion Generation for 4D Synthesis from a Single Image
CV and Pattern Recognition
Makes one picture move and change like a video.