Joint 3D Geometry Reconstruction and Motion Generation for 4D Synthesis from a Single Image
By: Yanran Zhang, Ziyi Wang, Wenzhao Zheng, and more
Potential Business Impact:
Makes one picture move and change like a video.
Generating interactive and dynamic 4D scenes from a single static image remains a core challenge. Most existing generate-then-reconstruct and reconstruct-then-generate methods decouple geometry from motion, causing spatiotemporal inconsistencies and poor generalization. To address these issues, we extend the reconstruct-then-generate framework to jointly perform Motion generation and geometric Reconstruction for 4D Synthesis (MoRe4D). We first introduce TrajScene-60K, a large-scale dataset of 60,000 video samples with dense point trajectories, addressing the scarcity of high-quality 4D scene data. Based on this, we propose a diffusion-based 4D Scene Trajectory Generator (4D-STraG) to jointly generate geometrically consistent and motion-plausible 4D point trajectories. To leverage single-view priors, we design a depth-guided motion normalization strategy and a motion-aware module for effective integration of geometry and dynamics. We then propose a 4D View Synthesis Module (4D-ViSM) to render videos along arbitrary camera trajectories from the 4D point track representation. Experiments show that MoRe4D generates high-quality 4D scenes with multi-view consistency and rich dynamic details from a single image. Code: https://github.com/Zhangyr2022/MoRe4D.
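The abstract does not spell out how the depth-guided motion normalization or the point-track rendering in 4D-ViSM work. The sketch below is only a minimal illustration of the general ideas, assuming the normalization scales per-point displacements by first-frame depth and that novel views are obtained by pinhole projection of the per-frame point positions; the function names (`depth_normalize_trajectories`, `project_tracks`) and these specific formulations are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def depth_normalize_trajectories(trajs, depth0, eps=1e-6):
    """Scale per-point displacements by first-frame depth so that near and
    far points have comparable motion magnitudes (assumed formulation).

    trajs:  (T, N, 3) 3D point positions over T frames.
    depth0: (N,) depth of each point in the first frame.
    Returns trajectories of the same shape with normalized displacements.
    """
    origin = trajs[0]                                  # (N, 3) first-frame positions
    disp = trajs - origin[None]                        # (T, N, 3) displacements
    disp_norm = disp / (depth0[None, :, None] + eps)   # inverse-depth scaling
    return origin[None] + disp_norm

def project_tracks(trajs, K, w2c):
    """Project 4D point tracks into an arbitrary target camera.

    trajs: (T, N, 3) world-space point positions per frame.
    K:     (3, 3) pinhole intrinsics.
    w2c:   (4, 4) world-to-camera extrinsics of the target view.
    Returns (T, N, 2) pixel coordinates and (T, N) depths.
    """
    T, N, _ = trajs.shape
    homo = np.concatenate([trajs, np.ones((T, N, 1))], axis=-1)  # homogeneous coords
    cam = homo @ w2c.T                                           # (T, N, 4) camera frame
    z = cam[..., 2:3]
    uv = (cam[..., :3] / np.clip(z, 1e-6, None)) @ K.T           # perspective divide
    return uv[..., :2], z[..., 0]

# Toy usage: 4 frames, 5 points drifting along +x, viewed from an identity camera.
trajs = np.tile(np.array([[0.0, 0.0, 2.0]]), (4, 5, 1))
trajs += np.arange(4)[:, None, None] * np.array([0.05, 0.0, 0.0])
depth0 = trajs[0, :, 2]
norm_trajs = depth_normalize_trajectories(trajs, depth0)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
uv, z = project_tracks(trajs, K, np.eye(4))
print(uv.shape, z.shape)  # (4, 5, 2) (4, 5)
```

In the actual pipeline, 4D-STraG would generate the dense point trajectories and 4D-ViSM would render full video frames from them; the projection step above only shows how a point-track representation can be re-viewed under an arbitrary camera pose.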
Similar Papers
Motion4D: Learning 3D-Consistent Motion and Semantics for 4D Scene Understanding
CV and Pattern Recognition
Makes videos show 3D worlds without flickering.
Geo4D: Leveraging Video Generators for Geometric 4D Scene Reconstruction
CV and Pattern Recognition
Turns regular videos into 3D moving worlds.
SyncMV4D: Synchronized Multi-view Joint Diffusion of Appearance and Motion for Hand-Object Interaction Synthesis
CV and Pattern Recognition
Creates realistic 3D animations of people and objects.