NovaFlow: Zero-Shot Manipulation via Actionable Flow from Generated Videos
By: Hongyu Li, Lingfeng Sun, Yafei Hu, and more
Potential Business Impact:
Robots learn new tasks just by watching a video.
Enabling robots to execute novel manipulation tasks zero-shot is a central goal in robotics. Most existing methods assume in-distribution tasks or rely on fine-tuning with embodiment-matched data, limiting transfer across platforms. We present NovaFlow, an autonomous manipulation framework that converts a task description into an actionable plan for a target robot without any demonstrations. Given a task, NovaFlow synthesizes a video using a video generation model and distills it into 3D actionable object flow using off-the-shelf perception modules. From the object flow, it computes relative poses for rigid objects and realizes them as robot actions via grasp proposals and trajectory optimization. For deformable objects, the flow serves as a tracking objective for model-based planning with a particle-based dynamics model. By decoupling task understanding from low-level control, NovaFlow transfers naturally across embodiments. We validate NovaFlow on rigid, articulated, and deformable object manipulation tasks using a table-top Franka arm and a Spot quadrupedal mobile robot, achieving effective zero-shot execution without demonstrations or embodiment-specific training. Project website: https://novaflow.lhy.xyz/.
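To make the rigid-object step concrete: recovering a relative pose from 3D object flow amounts to fitting an SE(3) transform to tracked 3D point correspondences, for which the Kabsch/Umeyama algorithm is a standard choice. The sketch below is illustrative only, not the authors' implementation; the function name and the toy data are assumptions, and only the general idea (pose from tracked object points) comes from the abstract.

```python
# Minimal sketch (not NovaFlow's code): estimate the relative rigid pose
# (R, t) implied by 3D object flow, i.e. the same object points tracked
# at two frames, using the Kabsch algorithm. Names here are hypothetical.
import numpy as np

def relative_pose_from_flow(pts_t0: np.ndarray, pts_t1: np.ndarray):
    """Fit (R, t) such that pts_t1 ~= pts_t0 @ R.T + t, from N >= 3 points.

    pts_t0, pts_t1: (N, 3) arrays of corresponding 3D points.
    Returns a 3x3 rotation matrix R and a translation 3-vector t.
    """
    c0, c1 = pts_t0.mean(axis=0), pts_t1.mean(axis=0)   # centroids
    H = (pts_t0 - c0).T @ (pts_t1 - c1)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c1 - R @ c0
    return R, t

# Toy check: rotate a random point cloud 30 degrees about z and shift it.
rng = np.random.default_rng(0)
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.05, 0.2])
pts = rng.normal(size=(50, 3))
R_est, t_est = relative_pose_from_flow(pts, pts @ R_true.T + t_true)
assert np.allclose(R_est, R_true, atol=1e-6)
assert np.allclose(t_est, t_true, atol=1e-6)
```

For deformable objects this pose-fitting step would not apply; per the abstract, the flow instead acts as a tracking objective (e.g., a cost on the distance between predicted particle states and the flow targets) for model-based planning with a particle-based dynamics model.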
Similar Papers
Dream2Flow: Bridging Video Generation and Open-World Manipulation with 3D Object Flow
Robotics
Robots learn to move objects from videos.
ViSA-Flow: Accelerating Robot Skill Learning via Large-Scale Video Semantic Action Flow
Robotics
Robots learn to do tasks by watching videos.
3DFlowAction: Learning Cross-Embodiment Manipulation from 3D Flow World Model
Robotics
Robots learn to move objects by watching how they move.