
AllSummedUp: An Open-Source Framework for Comparing Summarization Evaluation Metrics

Published: August 29, 2025 | arXiv ID: 2508.21389v1

By: Tanguy Herserant, Vincent Guigue

Potential Business Impact:

Makes the automatic evaluation of machine-generated summaries more trustworthy and reproducible.

Business Areas:
Text Analytics, Data and Analytics, Software

This paper investigates reproducibility challenges in automatic text summarization evaluation. Based on experiments conducted across six representative metrics ranging from classical approaches like ROUGE to recent LLM-based methods (G-Eval, SEval-Ex), we highlight significant discrepancies between reported performances in the literature and those observed in our experimental setting. We introduce a unified, open-source framework, applied to the SummEval dataset and designed to support fair and transparent comparison of evaluation metrics. Our results reveal a structural trade-off: metrics with the highest alignment with human judgments tend to be computationally intensive and less stable across runs. Beyond comparative analysis, this study highlights key concerns about relying on LLMs for evaluation, stressing their randomness, technical dependencies, and limited reproducibility. We advocate for more robust evaluation protocols including exhaustive documentation and methodological standardization to ensure greater reliability in automatic summarization assessment.
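The comparison protocol the abstract describes boils down to scoring candidate summaries with each automatic metric and measuring how well those scores align with human judgments. The sketch below illustrates that general idea only; the AllSummedUp framework's actual API is not given here, the toy examples and human ratings are invented, and ROUGE-L via the `rouge-score` package stands in for any of the six metrics studied.

```python
# Illustrative sketch, not the AllSummedUp implementation: score summaries with an
# automatic metric and check alignment with human judgments via rank correlation.
# Assumes the `rouge-score` and `scipy` packages; the data below is hypothetical.

from rouge_score import rouge_scorer
from scipy.stats import spearmanr, kendalltau

# Hypothetical mini-sample in the spirit of SummEval: each item has a reference
# summary, a system summary, and an averaged human relevance rating (1-5).
samples = [
    {"reference": "The council approved the new budget on Tuesday.",
     "candidate": "The new budget was approved by the council on Tuesday.",
     "human": 4.7},
    {"reference": "Heavy rain caused flooding across the region overnight.",
     "candidate": "The weather was bad.",
     "human": 2.1},
    {"reference": "Researchers released an open-source tool for metric comparison.",
     "candidate": "An open-source tool for comparing metrics was released.",
     "human": 4.5},
]

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)

metric_scores, human_scores = [], []
for s in samples:
    rouge = scorer.score(s["reference"], s["candidate"])
    # Use ROUGE-L F1 as the automatic metric score for this example.
    metric_scores.append(rouge["rougeL"].fmeasure)
    human_scores.append(s["human"])

# Correlation with human judgments: the alignment criterion on which metrics are compared.
rho, _ = spearmanr(metric_scores, human_scores)
tau, _ = kendalltau(metric_scores, human_scores)
print(f"Spearman rho = {rho:.3f}, Kendall tau = {tau:.3f}")
```

Repeating such a run several times with a stochastic, LLM-based metric in place of ROUGE would surface the run-to-run instability the paper flags for the metrics that align best with human judgments.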

Country of Origin
🇫🇷 France

Page Count
11 pages

Category
Computer Science:
Computation and Language