PhysicalAgent: Towards General Cognitive Robotics with Foundation World Models
By: Artem Lykov, Jeffrin Sam, Hung Khang Nguyen, and more
Potential Business Impact:
Robots learn to do tasks by watching videos.
We introduce PhysicalAgent, an agentic framework for robotic manipulation that integrates iterative reasoning, diffusion-based video generation, and closed-loop execution. Given a textual instruction, our method generates short video demonstrations of candidate trajectories, executes them on the robot, and iteratively re-plans in response to failures. This approach enables robust recovery from execution errors. We evaluate PhysicalAgent across multiple perceptual modalities (egocentric, third-person, and simulated) and robotic embodiments (bimanual UR3, Unitree G1 humanoid, simulated GR1), comparing against state-of-the-art task-specific baselines. Experiments demonstrate that our method consistently outperforms prior approaches, achieving up to 83% success on human-familiar tasks. Physical trials reveal that first-attempt success is limited (20-30%), yet iterative correction increases overall success to 80% across platforms. These results highlight the potential of video-based generative reasoning for general-purpose robotic manipulation and underscore the importance of iterative execution for recovering from initial failures. Our framework paves the way for scalable, adaptable, and robust robot control.
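The abstract describes a closed-loop cycle: generate a short video demonstration of a candidate trajectory from the textual instruction, execute it on the robot, and re-plan when execution fails. The sketch below illustrates one possible shape of that loop under stated assumptions; every name in it (planner, extractor, robot, Attempt, and their methods) is a hypothetical placeholder, not the paper's actual interface.

```python
# Hypothetical sketch of the "generate video -> execute -> re-plan" loop described
# in the abstract. All component names are placeholders, not the authors' API.

from dataclasses import dataclass


@dataclass
class Attempt:
    success: bool
    feedback: str  # e.g. a textual summary of why execution failed


def run_physical_agent(instruction: str,
                       planner,    # assumed: diffusion model mapping text (+ feedback) to a short video
                       extractor,  # assumed: converts a generated video into a robot trajectory
                       robot,      # assumed: executes a trajectory and reports an Attempt
                       max_iterations: int = 5) -> bool:
    """Iteratively plan with generated video demonstrations, re-planning on failure."""
    feedback = ""
    for _ in range(max_iterations):
        # 1. Generate a short video demonstration of a candidate trajectory.
        video = planner.generate(instruction, feedback=feedback)
        # 2. Turn the demonstration into an executable trajectory.
        trajectory = extractor.extract(video)
        # 3. Execute on the robot and observe the outcome.
        attempt = robot.execute(trajectory)
        if attempt.success:
            return True
        # 4. Feed the failure description back into the next planning round.
        feedback = attempt.feedback
    return False
```

This mirrors the abstract's reported behavior: a single pass succeeds only 20-30% of the time in physical trials, while repeating the plan-execute-correct cycle raises overall success to around 80%.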
Similar Papers
Robot Learning from a Physical World Model
Robotics
Teaches robots to do tasks by watching generated videos.
CoinRobot: Generalized End-to-end Robotic Learning for Physical Intelligence
Robotics
Robots learn new jobs faster on different machines.
Simulating the Visual World with Artificial Intelligence: A Roadmap
Artificial Intelligence
Creates realistic videos that act like real worlds.