MM-BrowseComp: A Comprehensive Benchmark for Multimodal Browsing Agents
By: Shilong Li, Xingyuan Bu, Wenjie Wang, and more
Potential Business Impact:
Tests AI's ability to understand web pages with pictures.
AI agents with advanced reasoning and tool use capabilities have demonstrated impressive performance in web browsing for deep search. While existing benchmarks such as BrowseComp evaluate these browsing abilities, they primarily focus on textual information, overlooking the prevalence of multimodal content. To bridge this gap, we introduce MM-BrowseComp, a novel benchmark comprising 224 challenging, hand-crafted questions specifically designed to assess agents' multimodal retrieval and reasoning capabilities. These questions often incorporate images in prompts, and crucial information encountered during the search and reasoning process may also be embedded within images or videos on webpages. Consequently, methods relying solely on text prove insufficient for our benchmark. Additionally, we provide a verified checklist for each question, enabling fine-grained analysis of multimodal dependencies and reasoning paths. Our comprehensive evaluation of state-of-the-art models on MM-BrowseComp reveals that even top models like OpenAI o3 with tools achieve only 29.02% accuracy, highlighting the suboptimal multimodal capabilities and lack of native multimodal reasoning in current models.
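To make the checklist mechanism concrete, here is a minimal sketch in Python of how a per-question checklist could support the fine-grained analysis the abstract describes. This is not the authors' released code: the `ChecklistItem`, `BenchmarkQuestion`, and `score` names, the `requires_image` flag, and the exact-match scoring rule are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    """One verifiable step on a question's reasoning path (hypothetical schema)."""
    description: str      # e.g. "Identified the landmark shown in the prompt image"
    requires_image: bool  # True if this step depends on visual content
    satisfied: bool = False

@dataclass
class BenchmarkQuestion:
    """A single benchmark entry: prompt, reference answer, and its checklist."""
    question: str
    answer: str
    checklist: list[ChecklistItem] = field(default_factory=list)

def score(q: BenchmarkQuestion, predicted: str) -> dict:
    """Combine final-answer accuracy with checklist coverage.

    Coverage separates retrieval/reasoning failures along the path
    from a merely wrong final answer; the exact-match rule here is
    a placeholder for whatever verification the benchmark uses.
    """
    correct = predicted.strip().lower() == q.answer.strip().lower()
    done = [c for c in q.checklist if c.satisfied]
    visual = [c for c in q.checklist if c.requires_image]
    return {
        "correct": correct,
        "checklist_coverage": len(done) / max(len(q.checklist), 1),
        "multimodal_steps": len(visual),
    }
```

In a setup like this, each checklist item would be marked satisfied by a judge or annotator after inspecting the agent's trajectory; coverage then reveals whether a failure came from missing a multimodal step or from faulty final reasoning.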
Similar Papers
MMSearch-Plus: A Simple Yet Challenging Benchmark for Multimodal Browsing Agents
Artificial Intelligence
Helps computers understand pictures and text together.
BrowseComp-ZH: Benchmarking Web Browsing Ability of Large Language Models in Chinese
Computation and Language
Tests AI's ability to find info on the Chinese web.
BrowseComp: A Simple Yet Challenging Benchmark for Browsing Agents
Computation and Language
Tests how well computers can find hard-to-find answers on the internet.