VLM-driven Skill Selection for Robotic Assembly Tasks

Published: November 7, 2025 | arXiv ID: 2511.05680v1

By: Jeong-Jung Kim, Doo-Yeol Koh, Chang-Hyun Kim

Potential Business Impact:

Robots that assemble parts by interpreting camera images and natural-language instructions, enabling more flexible assembly automation.

Business Areas:
Robotics Hardware, Science and Engineering, Software

This paper presents a robotic assembly framework that combines Vision-Language Models (VLMs) with imitation learning. The system employs a gripper-equipped robot operating in 3D space to perform assembly operations. The framework integrates visual perception, natural-language understanding, and learned primitive skills to enable flexible, adaptive robotic manipulation. Experimental results demonstrate the effectiveness of the approach in assembly scenarios, achieving high success rates while maintaining interpretability through structured primitive-skill decomposition.
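
The abstract describes an architecture in which a VLM selects among imitation-learned primitive skills. The sketch below illustrates one plausible shape of such a loop; it is not the authors' implementation, and all names in it (query_vlm, Observation, SKILLS, select_and_run_skill) are hypothetical. A VLM is prompted with the current camera image, the task instruction, and the list of available primitives, and its reply is dispatched to the matching skill.

```python
# Minimal sketch of VLM-driven skill selection, assuming a generic
# multimodal-model call. Not the paper's code; all names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Observation:
    image: bytes        # current RGB camera frame
    instruction: str    # natural-language task description


# Imitation-learned primitive skills, keyed by name. Each maps an
# observation to low-level robot commands (stubbed here with prints).
SKILLS: Dict[str, Callable[[Observation], None]] = {
    "pick": lambda obs: print("executing pick"),
    "place": lambda obs: print("executing place"),
    "insert": lambda obs: print("executing insert"),
}


def query_vlm(image: bytes, prompt: str) -> str:
    """Stub for a multimodal-model call; a real system would send the
    image and prompt to a VLM API and return its text reply."""
    return "pick"  # placeholder response for illustration


def select_and_run_skill(obs: Observation) -> str:
    """Ask the VLM to choose one primitive skill, then execute it."""
    prompt = (
        f"Task: {obs.instruction}\n"
        f"Available skills: {', '.join(SKILLS)}\n"
        "Reply with exactly one skill name."
    )
    choice = query_vlm(obs.image, prompt).strip().lower()
    if choice not in SKILLS:
        raise ValueError(f"VLM returned unknown skill: {choice!r}")
    SKILLS[choice](obs)  # dispatch to the learned primitive
    return choice


if __name__ == "__main__":
    obs = Observation(image=b"", instruction="Insert the peg into the hole")
    print("selected:", select_and_run_skill(obs))
```

Constraining the VLM's output to a fixed skill vocabulary, as in the prompt above, is one way a structured primitive-skill decomposition keeps the pipeline interpretable: every decision is a named, inspectable skill choice rather than an opaque end-to-end action.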

Page Count
6 pages

Category
Computer Science:
Robotics