Implicit State Estimation via Video Replanning
By: Po-Chen Ko, Jiayuan Mao, Yu-Hsiang Fu, and more
Potential Business Impact:
Helps robots learn from mistakes to do tasks better.
Video-based representations have gained prominence in planning and decision-making due to their ability to encode rich spatiotemporal dynamics and geometric relationships. These representations enable flexible and generalizable solutions for complex tasks such as object manipulation and navigation. However, existing video planning frameworks often struggle to adapt to failures at interaction time due to their inability to reason about uncertainties in partially observed environments. To overcome these limitations, we introduce a novel framework that integrates interaction-time data into the planning process. Our approach updates model parameters online and filters out previously failed plans during generation. This enables implicit state estimation, allowing the system to adapt dynamically without explicitly modeling unknown state variables. We evaluate our framework through extensive experiments on a new simulated manipulation benchmark, demonstrating its ability to improve replanning performance and advance the field of video-based decision-making.
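The abstract describes a replanning loop that updates model parameters online and filters out previously failed plans during generation. Below is a minimal, hypothetical sketch of such a loop, assuming a candidate-sampling video planner; the names (`VideoPlanner`, `replan_loop`, `plan_signature`) and the toy update rule are illustrative placeholders, not the paper's actual implementation.

```python
# Minimal sketch of interaction-time replanning with implicit state estimation.
# All class and function names are illustrative placeholders, not the authors' API.

import random


class VideoPlanner:
    """Toy stand-in for a generative video planner."""

    def __init__(self, num_candidates=8):
        self.num_candidates = num_candidates
        self.params = {"bias": 0.0}  # stand-in for model parameters

    def generate_plans(self, observation):
        # Sample candidate plans; a real planner would sample video rollouts.
        return [("plan", i, self.params["bias"]) for i in range(self.num_candidates)]

    def update_online(self, observation, failed_plan):
        # Online parameter update from interaction-time data (assumed update step).
        self.params["bias"] += 0.1


def plan_signature(plan):
    """Hashable summary used to filter out previously failed plans."""
    return plan[:2]


def replan_loop(planner, env, max_attempts=5):
    failed = set()  # signatures of plans that already failed
    obs = env["initial_observation"]
    for _ in range(max_attempts):
        candidates = planner.generate_plans(obs)
        # Implicit state estimation: the unknown state is never modeled
        # explicitly; plans matching earlier failures are simply ruled out.
        candidates = [p for p in candidates if plan_signature(p) not in failed]
        if not candidates:
            break
        plan = random.choice(candidates)
        if env["execute"](plan):
            return plan
        failed.add(plan_signature(plan))
        planner.update_online(obs, plan)  # adapt parameters from the failure
    return None


if __name__ == "__main__":
    env = {"initial_observation": None,
           "execute": lambda plan: plan[1] == 3}  # toy success condition
    print("found plan:", replan_loop(VideoPlanner(), env))
```

The key design choice this sketch illustrates is that adaptation happens through two channels at once: the planner's parameters shift after each failure, and the candidate pool shrinks as failed plans are rejected, so the system converges on feasible plans without ever constructing an explicit estimate of the hidden state.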
Similar Papers
Scene Graph-Guided Proactive Replanning for Failure-Resilient Embodied Agent
Robotics
Robots learn to fix plans before mistakes happen.
Multi-step manipulation task and motion planning guided by video demonstration
Robotics
Robots learn to do tasks by watching videos.
Zero to Autonomy in Real-Time: Online Adaptation of Dynamics in Unstructured Environments
Robotics
Robots learn to drive on slippery ice quickly.