From Grounding to Manipulation: Case Studies of Foundation Model Integration in Embodied Robotic Systems
By: Xiuchao Sui, Daiying Tian, Qi Sun, and more
Potential Business Impact:
Teaches robots to follow instructions and move.
Foundation models (FMs) are increasingly used to bridge language and action in embodied agents, yet the operational characteristics of different FM integration strategies remain under-explored -- particularly for complex instruction following and versatile action generation in changing environments. This paper examines three paradigms for building robotic systems: end-to-end vision-language-action (VLA) models that implicitly integrate perception and planning, and modular pipelines incorporating either vision-language models (VLMs) or multimodal large language models (MLLMs). We evaluate these paradigms through two focused case studies: a complex instruction grounding task assessing fine-grained instruction understanding and cross-modal disambiguation, and an object manipulation task targeting skill transfer via VLA finetuning. Our experiments in zero-shot and few-shot settings reveal trade-offs in generalization and data efficiency. By exploring performance limits, we distill design implications for developing language-driven physical agents and outline emerging challenges and opportunities for FM-powered robotics in real-world conditions.
Similar Papers
Foundation Model Driven Robotics: A Comprehensive Review
Robotics
Robots understand and do tasks better with smart AI.
FMimic: Foundation Models are Fine-grained Action Learners from Human Videos
Robotics
Robots learn new skills from just a few videos.
Survey of Vision-Language-Action Models for Embodied Manipulation
Robotics
Robots learn to do tasks by watching and acting.