AllSummedUp: An Open-Source Framework for Comparing Summarization Evaluation Metrics
By: Tanguy Herserant, Vincent Guigue
Potential Business Impact:
Makes the scoring of computer-written summaries more trustworthy and fair.
This paper investigates reproducibility challenges in automatic text summarization evaluation. Based on experiments conducted across six representative metrics, ranging from classical approaches such as ROUGE to recent LLM-based methods (G-Eval, SEval-Ex), we highlight significant discrepancies between the performances reported in the literature and those observed in our experimental setting. We introduce a unified, open-source framework, applied to the SummEval dataset and designed to support fair and transparent comparison of evaluation metrics. Our results reveal a structural trade-off: the metrics that align best with human judgments tend to be computationally intensive and less stable across runs. Beyond this comparative analysis, the study raises key concerns about relying on LLMs for evaluation, stressing their randomness, technical dependencies, and limited reproducibility. We advocate for more robust evaluation protocols, including exhaustive documentation and methodological standardization, to ensure greater reliability in automatic summarization assessment.
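To illustrate the kind of metric-versus-human comparison such a framework supports, here is a minimal Python sketch that scores candidate summaries with ROUGE (via the rouge-score package) and correlates the scores with human ratings using Spearman's rho. This is not the AllSummedUp implementation; the sample data, variable names, and choice of ROUGE-L are illustrative assumptions, and the paper's actual protocol covers six metrics on SummEval.

# Minimal sketch (not the AllSummedUp code): correlate an automatic metric
# with human judgments, as done when benchmarking metrics on SummEval-style data.
# Assumes the rouge-score and scipy packages are installed; the data is made up.
from rouge_score import rouge_scorer
from scipy.stats import spearmanr

# Hypothetical (reference, candidate, human_score) triples standing in for
# SummEval-style annotations.
samples = [
    ("the cat sat on the mat", "a cat was sitting on the mat", 4.5),
    ("the economy grew by two percent", "economic growth reached two percent", 4.0),
    ("rain is expected tomorrow", "the cat chased a ball", 1.0),
]

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

metric_scores, human_scores = [], []
for reference, candidate, human in samples:
    result = scorer.score(reference, candidate)  # dict: metric name -> Score(precision, recall, fmeasure)
    metric_scores.append(result["rougeL"].fmeasure)
    human_scores.append(human)

# Agreement between the metric and human judgments across the samples.
rho, p_value = spearmanr(metric_scores, human_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")

Swapping the ROUGE call for an LLM-based scorer (e.g., a G-Eval-style prompt) while keeping the same correlation step is, in spirit, how metrics can be compared under one harness; run-to-run variance of the LLM scorer is exactly the stability concern the paper raises.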
Similar Papers
An Empirical Comparison of Text Summarization: A Multi-Dimensional Evaluation of Large Language Models
Computation and Language
Finds best AI for summarizing text.
Towards Multi-dimensional Evaluation of LLM Summarization across Domains and Languages
Computation and Language
Tests how well computers summarize text.
Summarization Metrics for Spanish and Basque: Do Automatic Scores and LLM-Judges Correlate with Humans?
Computation and Language
Tests how well computers summarize Spanish and Basque text.