ChronoDreamer: Action-Conditioned World Model as an Online Simulator for Robotic Planning
By: Zhenhao Zhou, Dan Negrut
We present ChronoDreamer, an action-conditioned world model for contact-rich robotic manipulation. Given a history of egocentric RGB frames, contact maps, actions, and joint states, ChronoDreamer predicts future video frames, contact distributions, and joint angles with a spatio-temporal transformer trained via MaskGIT-style masked prediction. Contact forces are encoded as depth-weighted Gaussian splat images that render 3D forces into a camera-aligned format suitable for vision backbones. At inference, predicted rollouts are evaluated by a vision-language model that reasons about collision likelihood, enabling rejection sampling of unsafe actions before execution. We train and evaluate on DreamerBench, a simulation dataset generated with Project Chrono that provides synchronized RGB, contact splat, proprioception, and physics annotations across rigid and deformable object scenarios. Qualitative results show that the model preserves spatial coherence during non-contact motion and generates plausible contact predictions, while the VLM-based judge distinguishes collision from non-collision trajectories.
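
To make the contact representation concrete, the following is a minimal sketch of how 3D contact forces could be rendered as a depth-weighted Gaussian splat image under a pinhole camera model. The intrinsics K, the splat width sigma_px, and the specific 1/z depth weighting are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def contact_splat_image(points_cam, forces, K, hw=(224, 224), sigma_px=4.0):
    """points_cam: (N, 3) contact points in the camera frame (z > 0).
    forces: (N, 3) contact force vectors; K: (3, 3) camera intrinsics."""
    H, W = hw
    img = np.zeros((H, W), dtype=np.float32)
    ys, xs = np.mgrid[0:H, 0:W]
    for p, f in zip(points_cam, forces):
        x, y, z = p
        if z <= 1e-6:
            continue                           # skip points behind the camera
        u = K[0, 0] * x / z + K[0, 2]          # pinhole projection to pixel coords
        v = K[1, 1] * y / z + K[1, 2]
        weight = np.linalg.norm(f) / z         # assumed depth weighting: nearer, stronger contacts splat brighter
        g = np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2.0 * sigma_px ** 2))
        img += weight * g                      # accumulate the Gaussian splat
    return img / (img.max() + 1e-8)            # normalize for a vision backbone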
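
The inference-time safety filter can be summarized as a rejection-sampling loop over candidate actions. The sketch below assumes hypothetical sample_action, world_model, and vlm_judge interfaces and an illustrative collision threshold, since the abstract does not specify these details.

def plan_safe_action(history, sample_action, world_model, vlm_judge,
                     max_tries=8, collision_threshold=0.5):
    # Sample candidate actions, imagine their outcomes with the world model,
    # and let the VLM judge reject rollouts that look like collisions.
    for _ in range(max_tries):
        action = sample_action(history)
        rollout = world_model.rollout(history, action)        # predicted frames + contact splats
        p_collision = vlm_judge.collision_likelihood(rollout)
        if p_collision < collision_threshold:
            return action                                     # accept the first safe candidate
    return None                                               # no safe action found; caller should replan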