WebCoderBench: Benchmarking Web Application Generation with Comprehensive and Interpretable Evaluation Metrics
By: Chenxu Liu, Yingjie Fu, Wei Yang, and more
Potential Business Impact:
Tests computer programs that build websites.
Web applications (web apps) have become a key arena for large language models (LLMs) to demonstrate their code generation capabilities and commercial potential. However, building a benchmark for LLM-generated web apps remains challenging due to the need for real-world user requirements, generalizable evaluation metrics that do not rely on ground-truth implementations or test cases, and interpretable evaluation results. To address these challenges, we introduce WebCoderBench, the first real-world-collected, generalizable, and interpretable benchmark for web app generation. WebCoderBench comprises 1,572 real user requirements, covering diverse modalities and expression styles that reflect realistic user intentions. WebCoderBench provides 24 fine-grained evaluation metrics across 9 perspectives, combining rule-based and LLM-as-a-judge paradigms for fully automated, objective, and general evaluation. Moreover, WebCoderBench adopts human-preference-aligned weights over metrics to yield interpretable overall scores. Experiments across 12 representative LLMs and 2 LLM-based agents show that no single model dominates across all evaluation metrics, offering LLM developers an opportunity to optimize their models in a targeted manner toward a more capable version.
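The abstract describes aggregating rule-based and LLM-as-a-judge metrics under human-preference-aligned weights to produce an interpretable overall score. As a rough illustration only, and not the paper's actual metric set, weights, or aggregation formula, a weighted combination of normalized per-metric scores might look like the following Python sketch; all metric names and weight values below are hypothetical placeholders.

```python
# Illustrative sketch only: WebCoderBench's real metrics, weights, and
# aggregation are defined in the paper. The names and numbers here are
# hypothetical placeholders showing how human-preference-aligned weights
# could turn per-metric scores into a single interpretable overall score.

from typing import Dict


def overall_score(metric_scores: Dict[str, float],
                  weights: Dict[str, float]) -> float:
    """Weighted aggregation of normalized per-metric scores (each in [0, 1])."""
    total_weight = sum(weights[m] for m in metric_scores)
    return sum(weights[m] * s for m, s in metric_scores.items()) / total_weight


# Hypothetical per-app scores: some produced by rule-based checks,
# others by an LLM-as-a-judge rubric, all normalized to [0, 1].
scores = {
    "layout_correctness": 0.82,    # rule-based (e.g., DOM/style checks)
    "functional_coverage": 0.74,   # rule-based (e.g., interaction probes)
    "requirement_fidelity": 0.68,  # LLM-as-a-judge rating
    "visual_quality": 0.71,        # LLM-as-a-judge rating
}

# Hypothetical human-preference-aligned weights (illustrative values only).
weights = {
    "layout_correctness": 0.2,
    "functional_coverage": 0.3,
    "requirement_fidelity": 0.3,
    "visual_quality": 0.2,
}

print(f"Overall score: {overall_score(scores, weights):.3f}")
```

The weighted-sum form is one natural reading of "human-preference-aligned weights over metrics"; the benchmark's actual weighting scheme should be taken from the paper itself.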
Similar Papers
Web-Bench: A LLM Code Benchmark Based on Web Standards and Frameworks
Artificial Intelligence
Tests AI's ability to build websites.
WebUIBench: A Comprehensive Benchmark for Evaluating Multimodal Large Language Models in WebUI-to-Code
Computation and Language
Tests AI's ability to build websites.
FrontendBench: A Benchmark for Evaluating LLMs on Front-End Development via Automatic Evaluation
Software Engineering
Tests LLM-generated front-end code for websites.