Score: 2

Does Visual Grounding Enhance the Understanding of Embodied Knowledge in Large Language Models?

Published: October 19, 2025 | arXiv ID: 2510.16924v1

By: Zhihui Yang, Yupei Wang, Kaijie Mo, and more

BigTech Affiliations: Tencent

Potential Business Impact:

Visual grounding alone does not yet give language models a reliable understanding of the physical world.

Business Areas:
Visual Search, Internet Services

Despite significant progress in multimodal language models (LMs), it remains unclear whether visual grounding enhances their understanding of embodied knowledge compared to text-only models. To address this question, we propose a novel embodied knowledge understanding benchmark based on perceptual theory from psychology, encompassing the visual, auditory, tactile, gustatory, and olfactory external senses, as well as interoception. The benchmark assesses the models' perceptual abilities across these sensory modalities through vector-comparison and question-answering tasks comprising over 1,700 questions. Comparing 30 state-of-the-art LMs, we find, surprisingly, that vision-language models (VLMs) do not outperform text-only models on either task. Moreover, the models perform significantly worse in the visual dimension than in the other sensory dimensions. Further analysis reveals that the vector representations are easily influenced by word form and frequency, and that the models struggle with questions involving spatial perception and reasoning. Our findings underscore the need for more effective integration of embodied knowledge in LMs to enhance their understanding of the physical world.
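
To make the evaluation style concrete, the sketch below shows what a vector-comparison probe of sensory knowledge could look like: checking whether a text encoder places a concept closer to a plausible sensory attribute than to an implausible one. This is a minimal illustration, not the paper's benchmark; the encoder name, probe items, and modality pairings are assumptions chosen for the example.

```python
# Illustrative sketch (not the paper's benchmark): probe whether a text
# encoder places a concept nearer to its plausible sensory attribute
# than to an implausible one, using cosine similarity of embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer

# Any off-the-shelf text encoder works; this model name is an assumption.
model = SentenceTransformer("all-MiniLM-L6-v2")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# (concept, plausible sensory attribute, implausible sensory attribute)
# Hypothetical items covering gustatory, tactile, and auditory dimensions.
probes = [
    ("a slice of lemon", "it tastes sour", "it sounds loud"),
    ("a wool sweater",   "it feels soft",  "it smells like gasoline"),
    ("a fire alarm",     "it sounds loud", "it tastes sweet"),
]

correct = 0
for concept, plausible, implausible in probes:
    c, p, i = model.encode([concept, plausible, implausible])
    hit = cosine(c, p) > cosine(c, i)
    correct += hit
    print(f"{concept!r}: plausible={cosine(c, p):.3f} "
          f"implausible={cosine(c, i):.3f} -> {'ok' if hit else 'miss'}")

print(f"accuracy: {correct}/{len(probes)}")
```

A probe like this only checks relative embedding distances; the paper's finding that representations are swayed by word form and frequency suggests such similarity scores should be interpreted with caution.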

Country of Origin
🇨🇳 China

Page Count
19 pages

Category
Computer Science:
Computation and Language