Are Large Vision Language Models Good Game Players?
By: Xinyu Wang, Bohan Zhuang, Qi Wu
Potential Business Impact:
Tests AI's smarts with fun games.
Large Vision Language Models (LVLMs) have demonstrated remarkable abilities in understanding and reasoning about both visual and textual information. However, existing evaluation methods for LVLMs, primarily based on benchmarks like Visual Question Answering and image captioning, often fail to capture the full scope of LVLMs' capabilities. These benchmarks are limited by issues such as inadequate assessment of detailed visual perception, data contamination, and a lack of focus on multi-turn reasoning. To address these challenges, we propose LVLM-Playground, a game-based evaluation framework designed to provide a comprehensive assessment of LVLMs' cognitive and reasoning skills in structured environments. LVLM-Playground uses a set of games to evaluate LVLMs on four core tasks: Perceiving, Question Answering, Rule Following, and End-to-End Playing, with each task designed to assess specific abilities such as visual perception, reasoning, and decision-making. Based on this framework, we conduct extensive experiments that expose the limitations of current LVLMs, such as difficulty handling long structured outputs and perceiving detailed, dense visual elements. Code and data are publicly available at https://github.com/xinke-wang/LVLM-Playground.
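To make the four task types concrete, below is a minimal, hypothetical Python sketch of how a game-based evaluation loop over Perceiving, Question Answering, Rule Following, and End-to-End Playing could be organized. All class, function, and parameter names here are illustrative assumptions and are not drawn from the LVLM-Playground codebase.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Task(Enum):
    """The four evaluation tasks named in the abstract."""
    PERCEIVING = "perceiving"
    QUESTION_ANSWERING = "question_answering"
    RULE_FOLLOWING = "rule_following"
    END_TO_END_PLAYING = "end_to_end_playing"

@dataclass
class GameSample:
    """One evaluation item: a rendered game state plus a task-specific prompt.

    Fields are hypothetical; a real benchmark would define its own schema.
    """
    game: str         # e.g. a board game name (illustrative)
    task: Task
    image_path: str   # screenshot of the current game state
    prompt: str       # instruction given to the model
    reference: str    # expected answer / legal move used for scoring

def evaluate(samples: list[GameSample],
             model: Callable[[str, str], str],
             score: Callable[[str, str], float]) -> dict[Task, float]:
    """Average a per-sample score for each of the four tasks.

    `model(image_path, prompt)` stands in for an LVLM call;
    `score(prediction, reference)` is a task-appropriate metric.
    """
    totals: dict[Task, list[float]] = {t: [] for t in Task}
    for s in samples:
        prediction = model(s.image_path, s.prompt)
        totals[s.task].append(score(prediction, s.reference))
    return {t: sum(v) / len(v) if v else 0.0 for t, v in totals.items()}
```

In practice the scoring function would differ per task (e.g. exact match for Perceiving or Question Answering, move-legality checks for Rule Following, and game outcome for End-to-End Playing); the sketch only illustrates how results might be grouped by task.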
Similar Papers
Where do Large Vision-Language Models Look at when Answering Questions?
CV and Pattern Recognition
Shows where computers look to answer questions.
IQBench: How "Smart" Are Vision-Language Models? A Study with Human IQ Tests
CV and Pattern Recognition
Tests computers' smarts on picture puzzles.
VLRMBench: A Comprehensive and Challenging Benchmark for Vision-Language Reward Models
CV and Pattern Recognition
Tests AI's ability to explain and fix its mistakes.