Towards Reliable Evaluation of Large Language Models for Multilingual and Multimodal E-Commerce Applications
By: Shuyi Xie, Ziqin Liew, Hailing Zhang, and more
Potential Business Impact:
Tests online shopping AI on real customer questions.
Large Language Models (LLMs) excel on general-purpose NLP benchmarks, yet their capabilities in specialized domains remain underexplored. In e-commerce, existing evaluations such as EcomInstruct, ChineseEcomQA, eCeLLM, and Shopping MMLU suffer from limited task diversity (e.g., lacking product guidance and after-sales issues), limited task modalities (e.g., absence of multimodal data), reliance on synthetic or curated data, and a narrow focus on English and Chinese, leaving practitioners without reliable tools to assess models on complex, real-world shopping scenarios. We introduce EcomEval, a comprehensive multilingual and multimodal benchmark for evaluating LLMs in e-commerce. EcomEval covers six categories and 37 tasks (including 8 multimodal tasks), sourced primarily from authentic customer queries and transaction logs, reflecting the noisy and heterogeneous nature of real business interactions. To ensure both the quality and scalability of reference answers, we adopt a semi-automatic pipeline in which large models draft candidate responses that are subsequently reviewed and revised by over 50 expert annotators with strong e-commerce and multilingual expertise. We define difficulty levels for each question and task category by averaging evaluation scores across models of different sizes and capabilities, enabling challenge-oriented and fine-grained assessment. EcomEval also spans seven languages, including five low-resource Southeast Asian languages, offering a multilingual perspective absent from prior work.
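The difficulty-scoring idea in the abstract is straightforward to operationalize: score each question with several models, average the scores, and bucket the result. Below is a minimal sketch, assuming a 0-1 evaluation score per model; the model names, bucket thresholds, and `difficulty_level` helper are illustrative assumptions, not EcomEval's actual implementation.

```python
from statistics import mean

# Hypothetical per-model evaluation scores for one question, on a 0-1 scale.
# Model names and scores are made up for illustration.
scores_by_model = {
    "small-7b": 0.2,
    "medium-14b": 0.4,
    "large-72b": 0.7,
}

def difficulty_level(scores: dict[str, float]) -> str:
    """Map the mean score across models to a coarse difficulty bucket.

    A question most models answer well (high mean score) is "easy";
    one that even strong models fail (low mean score) is "hard".
    Thresholds here are assumed, not taken from the paper.
    """
    avg = mean(scores.values())
    if avg >= 0.7:
        return "easy"
    if avg >= 0.4:
        return "medium"
    return "hard"

print(difficulty_level(scores_by_model))  # -> "medium" for the sample scores
```

Averaging over models of different sizes, rather than relying on a single strong model, keeps the difficulty signal from simply mirroring one model's idiosyncratic failures.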
Similar Papers
More diverse more adaptive: Comprehensive Multi-task Learning for Improved LLM Domain Adaptation in E-commerce
Computation and Language
Makes online shopping smarter with more data.
MTQ-Eval: Multilingual Text Quality Evaluation for Language Models
Computation and Language
Helps computers judge good writing in many languages.
LC-Eval: A Bilingual Multi-Task Evaluation Benchmark for Long-Context Understanding
Computation and Language
Tests how well computers understand long stories.