Exploring the features used for summary evaluation by Human and GPT

Published: December 22, 2025 | arXiv ID: 2512.19620v1

By: Zahra Sadeghi, Evangelos Milios, Frank Rudzicz

Potential Business Impact:

Makes AI better at judging summaries the way people do.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Summary assessment involves evaluating how well a generated summary reflects the key ideas and meaning of the source text, which requires a deep understanding of the content. Large Language Models (LLMs) have been used to automate this process, acting as judges that evaluate summaries with respect to the original text. While previous research has investigated the alignment between LLM and human responses, it is not yet well understood which properties or features they exploit when asked to evaluate along a particular quality dimension, and little attention has been paid to mapping evaluation scores onto metrics. In this paper, we address this issue and identify features that align with human and Generative Pre-trained Transformer (GPT) responses by studying statistical and machine learning metrics. Furthermore, we show that instructing GPTs to employ the metrics used by humans can improve their judgment and bring it into closer agreement with human responses.
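The kind of analysis the abstract describes, checking which measurable properties of a summary track human versus GPT quality scores, can be illustrated with a small sketch. This is not the authors' code; the feature names, column names, and data file below are illustrative assumptions.

```python
# Minimal sketch, assuming a table with one row per (source, summary) pair,
# precomputed candidate features, and two quality ratings: one from human
# annotators and one from a GPT judge.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("summary_scores.csv")  # hypothetical file

# Illustrative candidate features; the paper's actual feature set may differ.
features = ["rouge_l", "compression_ratio", "novel_ngram_rate"]

for rater in ["human_score", "gpt_score"]:
    for feat in features:
        rho, p = spearmanr(df[feat], df[rater])
        print(f"{rater:12s} vs {feat:20s} Spearman rho={rho:+.3f} (p={p:.3g})")
```

Under this setup, a feature that correlates strongly with human scores but weakly with the GPT judge's scores would be a candidate metric to mention explicitly in the judging prompt, in the spirit of the paper's finding that instructing GPTs to use human-aligned metrics improves agreement.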

Country of Origin
🇨🇦 Canada

Page Count
11 pages

Category
Computer Science:
Computation and Language