Do-Undo: Generating and Reversing Physical Actions in Vision-Language Models
By: Shweta Mahajan, Shreya Kadambi, Hoang Le, and more
Potential Business Impact:
Teaches computers to understand and reverse real-world actions.
We introduce the Do-Undo task and benchmark to address a critical gap in vision-language models: understanding and generating physically plausible scene transformations driven by real-world actions. Unlike prior work focused on object-level edits, Do-Undo requires models to simulate the outcome of a physical action and then accurately reverse it, reflecting true cause-and-effect in the visual world. We curate a large-scale dataset of reversible actions from real-world videos and design a training strategy enforcing consistency for robust action grounding. Our experiments reveal that current models struggle with physical reversibility, underscoring the importance of this task for embodied AI, robotics, and physics-aware generative modeling. Do-Undo establishes an intuitive testbed for evaluating and advancing physical reasoning in multimodal systems.
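To make the idea of "simulate the action, then reverse it" concrete, here is a minimal sketch of a do-undo cycle-consistency objective. The abstract does not specify the paper's architecture or losses, so ActionEditModel, the action embeddings, and the MSE terms below are illustrative placeholders, not the authors' actual method.

```python
# Hypothetical sketch of a do-undo cycle-consistency objective.
# ActionEditModel, the action embeddings, and the loss terms are illustrative
# placeholders, not the paper's actual architecture or training recipe.
import torch
import torch.nn as nn

class ActionEditModel(nn.Module):
    """Toy stand-in for a conditional image-editing model: maps an image plus
    an action embedding to an edited image of the same shape."""
    def __init__(self, channels: int = 3, action_dim: int = 16):
        super().__init__()
        self.action_proj = nn.Linear(action_dim, channels)
        self.net = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, image: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # Broadcast the action embedding over spatial dims and fuse additively.
        bias = self.action_proj(action)[:, :, None, None]
        return self.net(image + bias)

def do_undo_loss(model: ActionEditModel,
                 image: torch.Tensor,
                 do_action: torch.Tensor,
                 undo_action: torch.Tensor,
                 target_after: torch.Tensor) -> torch.Tensor:
    """Supervise the 'do' edit against the observed post-action frame and
    enforce that applying the inverse action recovers the original image."""
    done = model(image, do_action)       # simulate the physical action
    undone = model(done, undo_action)    # reverse it
    forward_loss = nn.functional.mse_loss(done, target_after)
    cycle_loss = nn.functional.mse_loss(undone, image)
    return forward_loss + cycle_loss

# Minimal usage with random tensors standing in for video frames.
model = ActionEditModel()
img = torch.rand(2, 3, 64, 64)
after = torch.rand(2, 3, 64, 64)
do_emb, undo_emb = torch.rand(2, 16), torch.rand(2, 16)
loss = do_undo_loss(model, img, do_emb, undo_emb, after)
loss.backward()
```

The cycle term is what distinguishes the setup from ordinary instruction-guided editing: the model is penalized not only for getting the "do" edit wrong, but also for producing edits that cannot be undone by the inverse action.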
Similar Papers
LatBot: Distilling Universal Latent Actions for Vision-Language-Action Models
Robotics
Teaches robots to do new jobs with little practice.
Bidirectional Action Sequence Learning for Long-term Action Anticipation with Large Language Models
CV and Pattern Recognition
Predicts future actions by looking forward and backward.
UniUGP: Unifying Understanding, Generation, and Planing For End-to-end Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars learn from more videos.