WebMMU: A Benchmark for Multimodal Multilingual Website Understanding and Code Generation
By: Rabiul Awal, Mahsa Massoud, Aarash Feizi, and more
Potential Business Impact:
Helps computers build and fix websites better.
We present WebMMU, a multilingual benchmark that evaluates three core web tasks: (1) website visual question answering, (2) code editing involving HTML/CSS/JavaScript, and (3) mockup-to-code generation. Unlike prior benchmarks that treat these tasks separately, WebMMU unifies them using expert-annotated, real-world web data to assess models' abilities in complex multi-step reasoning, precise element grounding, and functional UI comprehension and coding. Our evaluation shows that while multimodal large language models (MLLMs) perform well on basic information extraction, they struggle with reasoning and grounding, editing code to preserve functionality, and generating design-to-code that maintains hierarchy and supports multilingual content. These findings reveal key limitations in current MLLMs and underscore the need for improved multimodal and cross-lingual reasoning to build future web agents capable of automating diverse web development tasks.
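The three unified tasks could be represented as a single evaluation record that is routed to a task-specific scorer. The sketch below is purely illustrative: the dataclass fields, task names, and `route` helper are assumptions for exposition, not WebMMU's actual schema or API.

```python
from dataclasses import dataclass

@dataclass
class WebMMUExample:
    # Hypothetical record schema; field names are illustrative assumptions.
    task: str        # "vqa", "code_edit", or "mockup_to_code"
    language: str    # natural language of the page, e.g. "en", "ar", "de"
    screenshot: str  # path to the website or mockup image
    prompt: str      # question or edit instruction shown to the model
    reference: str   # gold answer or reference HTML/CSS/JS

def route(example: WebMMUExample) -> str:
    """Dispatch an example to the evaluation mode matching its task type."""
    modes = {
        "vqa": "answer match",
        "code_edit": "functional diff check on HTML/CSS/JS",
        "mockup_to_code": "layout and hierarchy comparison",
    }
    return modes[example.task]

ex = WebMMUExample("vqa", "en", "page.png", "What is the top nav link?", "Home")
print(route(ex))  # answer match
```

The point of the single-record design is that one annotated screenshot can back all three tasks, which is what lets the benchmark probe grounding, editing, and generation on the same real-world pages.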
Similar Papers
Uni-MMMU: A Massive Multi-discipline Multimodal Unified Benchmark
CV and Pattern Recognition
Tests how well AI can see and create.
WebUIBench: A Comprehensive Benchmark for Evaluating Multimodal Large Language Models in WebUI-to-Code
Computation and Language
Tests AI's ability to build websites.
MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models
CV and Pattern Recognition
Tests AI that understands and creates with images and words.