FMimic: Foundation Models are Fine-grained Action Learners from Human Videos

Published: July 28, 2025 | arXiv ID: 2507.20622v1

By: Guangyan Chen, Meiling Wang, Te Cui, and more

Potential Business Impact:

Robots learn new skills from just a few videos.

Business Areas:
Simulation Software

Visual imitation learning (VIL) provides an efficient and intuitive strategy for robotic systems to acquire novel skills. Recent advancements in foundation models, particularly Vision Language Models (VLMs), have demonstrated remarkable visual and linguistic reasoning capabilities for VIL tasks. Despite this progress, existing approaches primarily use these models to learn high-level plans from human demonstrations, relying on pre-defined motion primitives to execute physical interactions, which remains a major bottleneck for robotic systems. In this work, we present FMimic, a novel paradigm that harnesses foundation models to directly learn generalizable skills even at the fine-grained action level, using only a limited number of human videos. Extensive experiments demonstrate that FMimic delivers strong performance with a single human video and significantly outperforms all other methods with five videos. Furthermore, it achieves improvements of over 39% and 29% on RLBench multi-task experiments and real-world manipulation tasks, respectively, and exceeds baselines by more than 34% on high-precision tasks and 47% on long-horizon tasks.
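For intuition, here is a minimal sketch of the paradigm shift the abstract describes: instead of mapping a model's high-level plan onto a fixed library of motion primitives, the foundation model is queried directly for fine-grained actions conditioned on a few human demonstration videos. This is an illustrative assumption, not the paper's implementation; the names `Action`, `query_vlm`, and `imitate_from_videos` are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import List, Sequence


@dataclass
class Action:
    """A fine-grained end-effector command: 3D position delta plus gripper state."""
    dx: float
    dy: float
    dz: float
    gripper_open: bool


def query_vlm(prompt: str, video_frames: Sequence[bytes]) -> List[Action]:
    """Placeholder for a foundation-model call (e.g., a VLM served over an API).

    A real system would send the demonstration frames and task prompt to the
    model and parse its response into low-level actions. Stubbed here so the
    sketch stays self-contained and runnable.
    """
    return [Action(0.0, 0.0, -0.05, True), Action(0.0, 0.0, 0.0, False)]


def imitate_from_videos(task: str, demos: List[Sequence[bytes]]) -> List[Action]:
    """Sketch of video-to-action learning: the model itself emits fine-grained
    actions, rather than selecting among pre-defined primitives such as
    pick(obj) or place(obj)."""
    prompt = (
        f"Task: {task}. Given {len(demos)} human demonstration video(s), "
        "output a sequence of fine-grained end-effector actions."
    )
    frames = [frame for demo in demos for frame in demo]
    return query_vlm(prompt, frames)


if __name__ == "__main__":
    # Few-shot setting from the abstract: one to five demonstration videos.
    actions = imitate_from_videos(
        "insert the peg into the hole", demos=[[b"frame0", b"frame1"]]
    )
    for a in actions:
        print(a)
```

The contrast with primitive-based pipelines is in the return type: the model outputs continuous low-level commands rather than names of pre-built skills, which is what allows high-precision and long-horizon behavior to be learned from the videos themselves.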

Country of Origin
🇨🇳 China

Page Count
31 pages

Category
Computer Science:
Robotics