Dexterous World Models
By: Byungjun Kim, Taeksoo Kim, Junyoung Lee, and more
Potential Business Impact:
Makes digital worlds react to your hands.
Recent progress in 3D reconstruction has made it easy to create realistic digital twins from everyday environments. However, current digital twins remain largely static and are limited to navigation and view synthesis without embodied interactivity. To bridge this gap, we introduce Dexterous World Model (DWM), a scene-action-conditioned video diffusion framework that models how dexterous human actions induce dynamic changes in static 3D scenes. Given a static 3D scene rendering and an egocentric hand motion sequence, DWM generates temporally coherent videos depicting plausible human-scene interactions. Our approach conditions video generation on (1) static scene renderings following a specified camera trajectory to ensure spatial consistency, and (2) egocentric hand mesh renderings that encode both geometry and motion cues to model action-conditioned dynamics directly. To train DWM, we construct a hybrid interaction video dataset. Synthetic egocentric interactions provide fully aligned supervision for joint locomotion and manipulation learning, while fixed-camera real-world videos contribute diverse and realistic object dynamics. Experiments demonstrate that DWM enables realistic and physically plausible interactions, such as grasping, opening, and moving objects, while maintaining camera and scene consistency. This framework represents a first step toward video diffusion-based interactive digital twins and enables embodied simulation from egocentric actions.
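The abstract describes two conditioning streams fed into a video diffusion model: static scene renderings along a specified camera trajectory and egocentric hand mesh renderings. The paper's implementation details are not given here, so the PyTorch sketch below is only a hypothetical illustration of how such per-frame conditions could be fused with the noisy video inside a denoiser. All module names, tensor shapes, and the simple concatenation-based fusion are assumptions, not the authors' actual architecture.

```python
# Hedged sketch: a toy denoiser that fuses a static scene rendering stream and a
# hand-mesh rendering stream with the noisy video at each denoising step.
# Everything below (encoders, shapes, fusion head) is an assumption made for
# illustration; DWM's real model is a full video diffusion backbone.

import torch
import torch.nn as nn


class ConditionedDenoiser(nn.Module):
    """Toy scene-action-conditioned denoiser for video diffusion."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Separate encoders for the two conditioning streams named in the abstract.
        self.scene_enc = nn.Conv3d(3, channels, kernel_size=3, padding=1)
        self.hand_enc = nn.Conv3d(3, channels, kernel_size=3, padding=1)
        self.video_enc = nn.Conv3d(3, channels, kernel_size=3, padding=1)
        # Fusion + prediction head standing in for the full diffusion U-Net / transformer.
        self.fuse = nn.Sequential(
            nn.Conv3d(3 * channels, channels, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv3d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, noisy_video, scene_render, hand_render):
        # All inputs: (batch, 3, frames, height, width)
        feats = torch.cat(
            [
                self.video_enc(noisy_video),
                self.scene_enc(scene_render),  # camera-consistent static scene frames
                self.hand_enc(hand_render),    # egocentric hand mesh renders (geometry + motion)
            ],
            dim=1,
        )
        return self.fuse(feats)  # predicted noise for this denoising step


if __name__ == "__main__":
    b, t, h, w = 1, 8, 64, 64
    model = ConditionedDenoiser()
    noisy = torch.randn(b, 3, t, h, w)
    scene = torch.randn(b, 3, t, h, w)  # renders along the specified camera trajectory
    hands = torch.randn(b, 3, t, h, w)  # rendered hand meshes for the action sequence
    print(model(noisy, scene, hands).shape)  # torch.Size([1, 3, 8, 64, 64])
```

In this sketch, conditioning is injected by channel-wise concatenation of encoded condition frames with the noisy video, one simple way to keep the generated frames tied to both the camera trajectory and the hand motion; the actual fusion mechanism used by DWM may differ.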
Similar Papers
World Models Can Leverage Human Videos for Dexterous Manipulation
Robotics
Teaches robots to move hands skillfully like humans.
Counterfactual World Models via Digital Twin-conditioned Video Diffusion
CV and Pattern Recognition
Lets AI imagine "what if" scenarios in videos.
EgoTwin: Dreaming Body and View in First Person
CV and Pattern Recognition
Creates realistic first-person videos from body movements.