Geometry-aware 4D Video Generation for Robot Manipulation
By: Zeyi Liu, Shuang Li, Eric Cousineau, et al.
Potential Business Impact:
Robots predict future scene motion from new camera angles.
Understanding and predicting the dynamics of the physical world can enhance a robot's ability to plan and interact effectively in complex environments. While recent video generation models have shown strong potential in modeling dynamic scenes, generating videos that are both temporally coherent and geometrically consistent across camera views remains a significant challenge. To address this, we propose a 4D video generation model that enforces multi-view 3D consistency of videos by supervising the model with cross-view pointmap alignment during training. This geometric supervision enables the model to learn a shared 3D representation of the scene, allowing it to predict future video sequences from novel viewpoints based solely on the given RGB-D observations, without requiring camera poses as inputs. Compared to existing baselines, our method produces more visually stable and spatially aligned predictions across multiple simulated and real-world robotic datasets. We further show that the predicted 4D videos can be used to recover robot end-effector trajectories using an off-the-shelf 6DoF pose tracker, supporting robust robot manipulation and generalization to novel camera viewpoints.
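To make the cross-view supervision concrete, below is a minimal sketch of what a pointmap alignment loss could look like. This is an illustrative assumption, not the paper's actual implementation: the function name `cross_view_pointmap_loss`, its signature, and the symmetric Chamfer-style formulation are hypothetical. It captures the core idea the abstract describes, that pixels from different views observing the same scene should map to consistent 3D points in a shared frame.

```python
import torch

def cross_view_pointmap_loss(pointmaps, valid_masks):
    """Hypothetical sketch of a cross-view pointmap alignment loss.

    pointmaps:   (V, H, W, 3) per-view pointmaps predicted in a shared
                 world frame, for V camera views.
    valid_masks: (V, H, W) boolean masks of pixels with valid depth.

    Assumed formulation: for every pair of views, penalize the distance
    from each valid 3D point in one view to its nearest neighbor in the
    other view (a symmetric Chamfer-style term), so that predictions
    from different viewpoints agree on a shared 3D scene.
    """
    V = pointmaps.shape[0]
    total, pairs = pointmaps.new_zeros(()), 0
    for i in range(V):
        for j in range(i + 1, V):
            p = pointmaps[i][valid_masks[i]]   # (Ni, 3) valid points, view i
            q = pointmaps[j][valid_masks[j]]   # (Nj, 3) valid points, view j
            if p.numel() == 0 or q.numel() == 0:
                continue                        # skip pairs with no overlap
            d = torch.cdist(p, q)               # (Ni, Nj) pairwise distances
            total = total + d.min(dim=1).values.mean() \
                          + d.min(dim=0).values.mean()
            pairs += 1
    return total / max(pairs, 1)
```

In such a setup, this term would be added to the usual video-generation objective during training only; at test time the model needs just RGB-D observations, consistent with the abstract's claim that camera poses are not required as inputs.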
Similar Papers
Geo4D: Leveraging Video Generators for Geometric 4D Scene Reconstruction
CV and Pattern Recognition
Turns regular videos into 3D moving worlds.
ShapeGen4D: Towards High Quality 4D Shape Generation from Videos
CV and Pattern Recognition
Turns videos into moving 3D models.
Video4DGen: Enhancing Video and 4D Generation through Mutual Optimization
Graphics
Creates realistic moving 3D objects from videos.