Are Large Vision Language Models Good Game Players?

Published: March 4, 2025 | arXiv ID: 2503.02358v1

By: Xinyu Wang, Bohan Zhuang, Qi Wu

Potential Business Impact:

Benchmarks vision-language models with games to reveal gaps in perception and multi-step reasoning.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Vision Language Models (LVLMs) have demonstrated remarkable abilities in understanding and reasoning about both visual and textual information. However, existing evaluation methods for LVLMs, primarily based on benchmarks such as Visual Question Answering and image captioning, often fail to capture the full scope of LVLMs' capabilities. These benchmarks are limited by issues such as inadequate assessment of detailed visual perception, data contamination, and a lack of focus on multi-turn reasoning. To address these challenges, we propose LVLM-Playground, a game-based evaluation framework designed to provide a comprehensive assessment of LVLMs' cognitive and reasoning skills in structured environments. LVLM-Playground uses a set of games to evaluate LVLMs on four core tasks: Perceiving, Question Answering, Rule Following, and End-to-End Playing, with each task designed to assess specific abilities such as visual perception, reasoning, and decision-making. Based on this framework, we conduct extensive experiments that expose the limitations of current LVLMs, such as handling long structured outputs and perceiving detailed and dense elements. Code and data are publicly available at https://github.com/xinke-wang/LVLM-Playground.
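To make the evaluation idea concrete, below is a minimal sketch of what one "Rule Following" check could look like: show the model a game board image, ask for a move, and score whether the move is legal. The `LVLM` protocol, `RandomStub`, and the tic-tac-toe scoring here are illustrative assumptions, not the actual LVLM-Playground API.

```python
# Hypothetical sketch of a game-based rule-following check for an LVLM.
# Names (LVLM, RandomStub, score_rule_following) are assumptions for
# illustration; they do not mirror the LVLM-Playground codebase.
import random
from typing import Protocol


class LVLM(Protocol):
    """Minimal interface: an image plus a text prompt yields a text answer."""
    def answer(self, image_path: str, prompt: str) -> str: ...


# Tic-tac-toe cells encoded as "rowcol" strings, e.g. "02" = row 0, col 2.
LEGAL_CELLS = {f"{r}{c}" for r in "012" for c in "012"}


def score_rule_following(model: LVLM, image_path: str, occupied: set[str]) -> float:
    """Ask the model for a move and return 1.0 if it names a legal, empty cell."""
    prompt = (
        "You are playing tic-tac-toe as X. Reply with the row and column "
        "of your move as two digits, e.g. '02' for row 0, column 2."
    )
    move = model.answer(image_path, prompt).strip()
    return 1.0 if move in LEGAL_CELLS and move not in occupied else 0.0


class RandomStub:
    """Stand-in model so the sketch runs without a real LVLM backend."""
    def answer(self, image_path: str, prompt: str) -> str:
        return random.choice(sorted(LEGAL_CELLS))


if __name__ == "__main__":
    model = RandomStub()
    # In a real harness, `occupied` would be derived from the board image.
    print(score_rule_following(model, "board.png", occupied={"00", "11"}))
```

A real harness would swap `RandomStub` for an actual model client and aggregate such scores across games and turns; the paper's other tasks (Perceiving, Question Answering, End-to-End Playing) would follow the same pattern with different prompts and scoring rules.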

Country of Origin
🇦🇺 Australia

Repos / Data Links
https://github.com/xinke-wang/LVLM-Playground
Page Count
38 pages

Category
Computer Science: Computer Vision and Pattern Recognition