Implicit State Estimation via Video Replanning

Published: October 20, 2025 | arXiv ID: 2510.17315v1

By: Po-Chen Ko, Jiayuan Mao, Yu-Hsiang Fu, and others

Potential Business Impact:

Helps robots recover from failures mid-task by replanning, improving task success rates.

Business Areas:
Motion Capture, Media and Entertainment, Video

Video-based representations have gained prominence in planning and decision-making due to their ability to encode rich spatiotemporal dynamics and geometric relationships. These representations enable flexible and generalizable solutions for complex tasks such as object manipulation and navigation. However, existing video planning frameworks often struggle to adapt to failures at interaction time due to their inability to reason about uncertainties in partially observed environments. To overcome these limitations, we introduce a novel framework that integrates interaction-time data into the planning process. Our approach updates model parameters online and filters out previously failed plans during generation. This enables implicit state estimation, allowing the system to adapt dynamically without explicitly modeling unknown state variables. We evaluate our framework through extensive experiments on a new simulated manipulation benchmark, demonstrating its ability to improve replanning performance and advance the field of video-based decision-making.
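The core loop described above (update online from interaction-time data, and filter out candidate plans that resemble previously failed ones) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `generate_plan`, `execute`, and `similarity` are hypothetical callables standing in for the video plan generator, the robot's execution of a plan, and a plan-similarity measure, and the online model update is reduced to a comment.

```python
def replan_with_failure_filtering(generate_plan, execute, similarity,
                                  max_attempts=5, sim_threshold=0.9):
    """Replanning loop that filters candidates resembling past failures.

    A sketch of the idea in the abstract: failed plans are remembered at
    interaction time, and newly generated plans that are too similar to a
    past failure are rejected before execution. All argument names are
    illustrative, not from the paper.
    """
    failed_plans = []
    for attempt in range(1, max_attempts + 1):
        # Sample a candidate plan; reject it if it is close to a past failure.
        plan = generate_plan()
        while any(similarity(plan, f) > sim_threshold for f in failed_plans):
            plan = generate_plan()
        if execute(plan):
            return plan, attempt  # success: return the plan and attempt count
        # On failure, remember the plan. In the paper's framework, model
        # parameters would also be updated online here.
        failed_plans.append(plan)
    return None, max_attempts


def make_plan_stream(seq):
    """Toy plan generator that yields plans from a fixed sequence."""
    it = iter(seq)
    return lambda: next(it)
```

As a toy usage, let plans be integers with exact-match similarity and a single correct plan `3`: a stream `[1, 1, 2, 3]` fails on `1`, skips the repeated `1` via the similarity filter, fails on `2`, then succeeds on `3` at the third attempt.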

Page Count
23 pages

Category
Computer Science:
Robotics