KRAST: Knowledge-Augmented Robotic Action Recognition with Structured Text for Vision-Language Models

Published: September 19, 2025 | arXiv ID: 2509.16452v1

By: Son Hai Nguyen, Diwei Wang, Jinhyeok Jang and more

Potential Business Impact:

Enables robots to recognize everyday human actions from video, supporting safer and more reliable autonomous operation in homes and care settings.

Business Areas:
Image Recognition, Data and Analytics, Software

Accurate vision-based action recognition is crucial for developing autonomous robots that can operate safely and reliably in complex, real-world environments. In this work, we advance video-based recognition of indoor daily actions for robotic perception by leveraging vision-language models (VLMs) enriched with domain-specific knowledge. We adapt a prompt-learning framework in which class-level textual descriptions of each action are embedded as learnable prompts into a frozen pre-trained VLM backbone. Several strategies for structuring and encoding these textual descriptions are designed and evaluated. Experiments on the ETRI-Activity3D dataset demonstrate that our method, using only RGB video inputs at test time, achieves over 95% accuracy and outperforms state-of-the-art approaches. These results highlight the effectiveness of knowledge-augmented prompts in enabling robust action recognition with minimal supervision.
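The prompt-learning idea in the abstract — learnable prompts attached to class-level text descriptions, matched against video features from a frozen VLM — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the action names, the tiny embedding dimension, the random "frozen" embeddings, and the single shared context vector are all assumptions standing in for a real VLM text/video encoder (CoOp-style prompt tuning).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: pretend a frozen VLM maps text and video into a
# shared 8-dim embedding space (real backbones like CLIP use 512+ dims).
DIM = 8
ACTIONS = ["drinking water", "brushing teeth", "reading a book"]

# Frozen class-level text embeddings, one per action description.
# In the paper these come from the VLM text encoder; here they are random.
text_emb = rng.normal(size=(len(ACTIONS), DIM))

# Learnable prompt context vector shared across classes (CoOp-style);
# it is the only trainable parameter, the backbone stays frozen.
prompt_ctx = np.zeros(DIM)

def classify(video_feat, ctx):
    """Pick the action whose prompted text embedding is most similar
    (cosine similarity) to the video feature."""
    prompts = text_emb + ctx                               # inject context
    prompts = prompts / np.linalg.norm(prompts, axis=1, keepdims=True)
    v = video_feat / np.linalg.norm(video_feat)
    return ACTIONS[int(np.argmax(prompts @ v))]

# A video feature that closely aligns with the first class's embedding.
video = text_emb[0] + 0.01 * rng.normal(size=DIM)
print(classify(video, prompt_ctx))  # → drinking water
```

In the actual method, `prompt_ctx` would be optimized by backpropagating a classification loss through the frozen VLM, so the prompts adapt the text side to the robot's domain without fine-tuning the backbone.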

Country of Origin
🇫🇷 France

Page Count
9 pages

Category
Computer Science:
CV and Pattern Recognition