Do-Undo: Generating and Reversing Physical Actions in Vision-Language Models

Published: December 15, 2025 | arXiv ID: 2512.13609v1

By: Shweta Mahajan, Shreya Kadambi, Hoang Le, and more

Potential Business Impact:

Teaches computers to understand and reverse real-world actions.

Business Areas:
Motion Capture, Media and Entertainment, Video

We introduce the Do-Undo task and benchmark to address a critical gap in vision-language models: understanding and generating physically plausible scene transformations driven by real-world actions. Unlike prior work focused on object-level edits, Do-Undo requires models to simulate the outcome of a physical action and then accurately reverse it, reflecting true cause-and-effect in the visual world. We curate a large-scale dataset of reversible actions from real-world videos and design a training strategy enforcing consistency for robust action grounding. Our experiments reveal that current models struggle with physical reversibility, underscoring the importance of this task for embodied AI, robotics, and physics-aware generative modeling. Do-Undo establishes an intuitive testbed for evaluating and advancing physical reasoning in multimodal systems.
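The "training strategy enforcing consistency" described above suggests a cycle-consistency objective: applying an action and then its inverse should recover the original scene. As a rough illustration only (the paper does not spell out its formulation here), a do-undo cycle loss might look like the sketch below; `edit_model`, `do_prompt`, and `undo_prompt` are hypothetical placeholders, not names from the paper.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of a do-undo cycle-consistency objective.
# `edit_model` stands in for any image-editing model that applies a
# text-described physical action to an image.

def do_undo_loss(edit_model, image, do_prompt, undo_prompt):
    """Apply an action, attempt to reverse it, and penalize deviation
    from the input frame.

    image:       (B, C, H, W) tensor of source frames
    do_prompt:   text describing the physical action (e.g. "open the drawer")
    undo_prompt: text describing the inverse action (e.g. "close the drawer")
    """
    done = edit_model(image, do_prompt)      # simulate the "do" action
    undone = edit_model(done, undo_prompt)   # simulate the "undo" action
    # Cycle consistency: undoing the action should recover the original scene.
    return F.l1_loss(undone, image)
```

This kind of loss would be added to the model's usual editing objective, so that the model is penalized not only for implausible edits but also for edits it cannot physically reverse.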

Page Count
16 pages

Category
Computer Science:
Computer Vision and Pattern Recognition