ShoppingComp: Are LLMs Really Ready for Your Shopping Cart?
By: Huaixiao Tou, Ying Zeng, Cong Ma, and more
Potential Business Impact:
Tests if shopping AI gives safe and good advice.
We present ShoppingComp, a challenging real-world benchmark for rigorously evaluating LLM-powered shopping agents on three core capabilities: precise product retrieval, expert-level report generation, and safety-critical decision making. Unlike prior e-commerce benchmarks, ShoppingComp introduces highly complex tasks built on the principles of grounding every task in real products and keeping results easy to verify, and it adds a novel evaluation dimension, identifying product safety hazards, alongside recommendation accuracy and report quality. The benchmark comprises 120 tasks and 1,026 scenarios, curated by 35 experts to reflect authentic shopping needs. Results reveal stark limitations of current LLMs: even state-of-the-art models achieve low scores (e.g., 11.22% for GPT-5 and 3.92% for Gemini-2.5-Flash). These findings highlight a substantial gap between research benchmarks and real-world deployment, where LLMs make critical errors such as failing to identify unsafe product usage or falling for promotional misinformation, leading to harmful recommendations. ShoppingComp fills this gap and establishes a new standard for advancing reliable, practical agents in e-commerce.
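To make the three evaluation dimensions concrete, here is a minimal, hypothetical sketch of how per-scenario judgments on retrieval, report quality, and safety could be combined into a single benchmark score. The class names, fields, and the zero-on-safety-failure scoring rule are illustrative assumptions for exposition only; they are not the authors' released evaluation harness.

```python
# Hypothetical sketch of a ShoppingComp-style scoring loop.
# ScenarioResult, the field names, and the scoring rule are assumptions,
# not the benchmark's actual implementation.
from dataclasses import dataclass


@dataclass
class ScenarioResult:
    """Agent output for one shopping scenario, judged on three axes."""
    retrieved_correct_product: bool   # precise product retrieval
    report_quality: float             # expert-graded report score in [0, 1]
    flagged_all_hazards: bool         # safety-critical decision making


def scenario_score(r: ScenarioResult) -> float:
    """Score one scenario; a missed safety hazard or a wrong product
    zeroes the scenario, reflecting the emphasis on critical errors."""
    if not r.flagged_all_hazards:
        return 0.0
    if not r.retrieved_correct_product:
        return 0.0
    return r.report_quality


def benchmark_score(results: list[ScenarioResult]) -> float:
    """Average scenario scores, reported as a percentage."""
    if not results:
        return 0.0
    return 100.0 * sum(scenario_score(r) for r in results) / len(results)


if __name__ == "__main__":
    demo = [
        ScenarioResult(True, 0.8, True),    # correct product, solid report
        ScenarioResult(True, 0.9, False),   # missed a safety hazard -> 0
        ScenarioResult(False, 0.7, True),   # wrong product retrieved -> 0
    ]
    print(f"Benchmark score: {benchmark_score(demo):.2f}%")  # ~26.67%
```

Under this toy rule, a single missed hazard or wrong retrieval wipes out an otherwise strong scenario, which is one plausible way an evaluation can yield the low aggregate scores reported above.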
Similar Papers
WebMall -- A Multi-Shop Benchmark for Evaluating Web Agents
Computation and Language
Helps online shoppers find best deals automatically.
ShoppingBench: A Real-World Intent-Grounded Shopping Benchmark for LLM-based Agents
Computation and Language
Helps online shoppers complete harder tasks.
Learning to Comparison-Shop
Information Retrieval
Helps online shoppers find better deals faster.