Exploring the features used for summary evaluation by Human and GPT
By: Zahra Sadeghi, Evangelos Milios, Frank Rudzicz
Potential Business Impact:
Makes AI better at judging summaries the way people do.
Summary assessment involves evaluating how well a generated summary reflects the key ideas and meaning of the source text, which requires a deep understanding of the content. Large Language Models (LLMs) have been used to automate this process, acting as judges that evaluate summaries with respect to the original text. While previous research has investigated the alignment between LLM and human responses, it is not yet well understood which properties or features they exploit when asked to evaluate along a particular quality dimension, and little attention has been paid to the mapping between evaluation scores and metrics. In this paper, we address this issue and discover features aligned with human and Generative Pre-trained Transformer (GPT) responses by studying statistical and machine learning metrics. Furthermore, we show that instructing GPTs to employ the metrics used by humans can improve their judgment and align it more closely with human responses.
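To make the feature-alignment idea concrete, here is a minimal sketch (not taken from the paper) of how one could rank-correlate candidate summary features with human and GPT judge scores to see which features each judge appears to rely on. The feature names, score arrays, and data values below are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch: estimate which candidate features align with human vs. GPT
# evaluation scores via Spearman rank correlation. All data here is made up.
from scipy.stats import spearmanr

# Hypothetical per-summary feature values (one value per summary).
features = {
    "rouge_l":      [0.42, 0.31, 0.55, 0.28, 0.61],
    "novel_ngrams": [0.10, 0.22, 0.05, 0.30, 0.08],
    "length_ratio": [0.18, 0.25, 0.15, 0.27, 0.12],
}
human_scores = [4, 3, 5, 2, 5]   # e.g., quality ratings on a 1-5 scale
gpt_scores   = [4, 2, 5, 3, 5]   # the same summaries scored by a GPT judge

for name, values in features.items():
    rho_h, p_h = spearmanr(values, human_scores)
    rho_g, p_g = spearmanr(values, gpt_scores)
    print(f"{name:>13}: human rho={rho_h:+.2f} (p={p_h:.2f}) | "
          f"GPT rho={rho_g:+.2f} (p={p_g:.2f})")
```

Features whose correlation is strong for humans but weak for the GPT judge are candidates to mention explicitly in the judging prompt, which is the spirit of the paper's metric-guided instruction experiment.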
Similar Papers
LLM-as-a-Grader: Practical Insights from Large Language Model for Short-Answer and Report Evaluation
Computation and Language
Computer grades student work like a teacher.
Comparison of Large Language Models for Deployment Requirements
Computation and Language
Helps pick the best AI for your needs.
Evaluating LLM Metrics Through Real-World Capabilities
Artificial Intelligence
Measures how well AI helps people with everyday tasks.