The Effect of Document Summarization on LLM-Based Relevance Judgments
By: Samaneh Mohtadi, Kevin Roitero, Stefano Mizzaro, and more
Relevance judgments are central to the evaluation of Information Retrieval (IR) systems, but obtaining them from human annotators is costly and time-consuming. Large Language Models (LLMs) have recently been proposed as automated assessors, showing promising alignment with human annotations. Most prior studies have treated documents as fixed units, feeding their full content directly to LLM assessors. We investigate how text summarization affects the reliability of LLM-based judgments and their downstream impact on IR evaluation. Using state-of-the-art LLMs across multiple TREC collections, we compare judgments made from full documents with those based on LLM-generated summaries of different lengths. We examine their agreement with human labels, their effect on retrieval effectiveness evaluation, and their influence on the stability of IR system rankings. Our findings show that summary-based judgments achieve system-ranking stability comparable to that of full-document judgments, while introducing systematic shifts in label distributions and biases that vary by model and dataset. These results highlight summarization as both an opportunity for more efficient large-scale IR evaluation and a methodological choice with important implications for the reliability of automatic judgments.
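To make the two evaluation questions in the abstract concrete, here is a minimal sketch of how one might quantify (a) agreement between LLM-based and human labels and (b) stability of system rankings across judgment variants. The choice of Cohen's kappa and Kendall's tau, the 0-3 graded label scale, the nDCG scores, and the system names are illustrative assumptions for this sketch, not the paper's exact measures or data.

```python
# Sketch: comparing full-document vs. summary-based LLM relevance judgments.
# Assumes judgments are stored as {(query_id, doc_id): graded_label} dicts and
# that per-system effectiveness has already been computed under each variant.
from scipy.stats import kendalltau
from sklearn.metrics import cohen_kappa_score


def label_agreement(qrels_a: dict, qrels_b: dict) -> float:
    """Cohen's kappa between two judgment sets over shared (query, doc) pairs."""
    keys = sorted(set(qrels_a) & set(qrels_b))
    return cohen_kappa_score([qrels_a[k] for k in keys],
                             [qrels_b[k] for k in keys])


def ranking_stability(scores_a: dict, scores_b: dict) -> float:
    """Kendall's tau between the system rankings induced by two qrel variants.

    scores_*: {system_name: effectiveness value}, e.g. nDCG computed under
    full-document judgments vs. summary-based judgments.
    """
    systems = sorted(set(scores_a) & set(scores_b))
    tau, _ = kendalltau([scores_a[s] for s in systems],
                        [scores_b[s] for s in systems])
    return tau


if __name__ == "__main__":
    # Toy labels on a 0-3 graded scale for three (query, doc) pairs.
    human = {("q1", "d1"): 2, ("q1", "d2"): 0, ("q2", "d1"): 3}
    full_doc = {("q1", "d1"): 2, ("q1", "d2"): 1, ("q2", "d1"): 3}
    summary = {("q1", "d1"): 1, ("q1", "d2"): 0, ("q2", "d1"): 3}

    print("kappa(full-document, human):", label_agreement(full_doc, human))
    print("kappa(summary, human):      ", label_agreement(summary, human))

    # Hypothetical per-system nDCG under each qrel variant.
    ndcg_full = {"bm25": 0.41, "dense": 0.55, "hybrid": 0.58}
    ndcg_summary = {"bm25": 0.39, "dense": 0.57, "hybrid": 0.56}
    print("Kendall's tau between system rankings:",
          ranking_stability(ndcg_full, ndcg_summary))
```

In this framing, high Kendall's tau with only moderate kappa would correspond to the abstract's finding: summaries can preserve the relative ordering of systems even when the label distributions themselves shift.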
Similar Papers
Do LLM-judges Align with Human Relevance in Cranfield-style Recommender Evaluation?
Information Retrieval
Lets computers judge movie recommendations fairly.
LLM-Evaluation Tropes: Perspectives on the Validity of LLM-Evaluations
Information Retrieval
AI judges might trick us into thinking systems are good.
How Do LLM-Generated Texts Impact Term-Based Retrieval Models?
Information Retrieval
Helps search engines find real writing better.