Towards Multi-dimensional Evaluation of LLM Summarization across Domains and Languages
By: Hyangsuk Min, Yuho Lee, Minjeong Ban, and more
Potential Business Impact:
Tests how well AI language models summarize text across different domains and languages.
Evaluation frameworks for text summarization have evolved in terms of both domain coverage and metrics. However, existing benchmarks still lack domain-specific assessment criteria, remain predominantly English-centric, and face challenges with human annotation due to the complexity of reasoning. To address these gaps, we introduce MSumBench, which provides a multi-dimensional, multi-domain evaluation of summarization in English and Chinese. It also incorporates specialized assessment criteria for each domain and leverages a multi-agent debate system to enhance annotation quality. By evaluating eight modern summarization models, we discover distinct performance patterns across domains and languages. We further examine large language models as summary evaluators, analyzing the correlation between their evaluation and summarization capabilities, and uncovering systematic bias in their assessment of self-generated summaries. Our benchmark dataset is publicly available at https://github.com/DISL-Lab/MSumBench.
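The abstract mentions multi-dimensional, per-domain assessment criteria and a multi-agent debate step for annotation. The sketch below is a minimal illustration of that general idea, not the paper's actual pipeline: the criteria names, the 1-5 scale, the two judge personas, and the `call_judge_llm` helper are all hypothetical placeholders.

```python
from statistics import mean
from typing import Dict, List

# Hypothetical per-domain criteria; the actual MSumBench dimensions may differ.
DOMAIN_CRITERIA: Dict[str, List[str]] = {
    "news": ["faithfulness", "coverage", "conciseness"],
    "medical": ["faithfulness", "terminology", "completeness"],
}


def call_judge_llm(source: str, summary: str, criterion: str, persona: str) -> int:
    """Stand-in for a real LLM judge call (hypothetical helper).

    Returns a 1-5 score for one criterion. The toy heuristic below only
    keeps the sketch runnable; a real judge would reason over the text.
    """
    base = 2 + min(3, len(summary.split()) // 20)
    return max(1, min(5, base - (1 if persona == "critic" else 0)))


def debate_score(source: str, summary: str, criterion: str) -> float:
    """Average two judge personas' scores, a crude stand-in for a debate round."""
    return mean(
        call_judge_llm(source, summary, criterion, persona)
        for persona in ("advocate", "critic")
    )


def evaluate_summary(source: str, summary: str, domain: str) -> Dict[str, float]:
    """Score one summary on every criterion defined for its domain."""
    return {c: debate_score(source, summary, c) for c in DOMAIN_CRITERIA[domain]}


if __name__ == "__main__":
    src = "A long news article about a city council vote on a new transit plan..."
    summ = "The city council narrowly approved the new transit plan on Tuesday."
    print(evaluate_summary(src, summ, "news"))
```

In a real setup, `call_judge_llm` would wrap an actual LLM API and the debate step would let the judges exchange arguments before re-scoring; both are simplified away here.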
Similar Papers
An Empirical Comparison of Text Summarization: A Multi-Dimensional Evaluation of Large Language Models
Computation and Language
Finds the best AI for summarizing text.
Domain Specific Benchmarks for Evaluating Multimodal Large Language Models
Machine Learning (CS)
Organizes AI tests for different subjects.
HSSBench: Benchmarking Humanities and Social Sciences Ability for Multimodal Large Language Models
Computation and Language
Helps computers understand history and art better.