Score: 1

Do LLM-judges Align with Human Relevance in Cranfield-style Recommender Evaluation?

Published: November 28, 2025 | arXiv ID: 2511.23312v1

By: Gustavo Penha, Aleksandr V. Petrov, Claudia Hauff, and more

Potential Business Impact:

Uses LLMs to automatically judge recommendation relevance, making recommender-system evaluation cheaper and more scalable.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Evaluating recommender systems remains a long-standing challenge, as offline methods based on historical user interactions and train-test splits often yield unstable and inconsistent results due to exposure bias, popularity bias, sampled evaluations, and missing-not-at-random patterns. In contrast, textual document retrieval benefits from robust, standardized evaluation via Cranfield-style test collections, which combine pooled relevance judgments with controlled setups. While recent work shows that adapting this methodology to recommender systems is feasible, constructing such collections remains costly due to the need for manual relevance judgments, thus limiting scalability. This paper investigates whether Large Language Models (LLMs) can serve as reliable automatic judges to address these scalability challenges. Using the ML-32M-ext Cranfield-style movie recommendation collection, we first examine the limitations of existing evaluation methodologies. Then we explore the alignment between LLM-judge and human-provided relevance labels, as well as the agreement between the recommender-system rankings they induce. We find that incorporating richer item metadata and longer user histories improves alignment, and that the LLM-judge yields high agreement with human-based rankings (Kendall's tau = 0.87). Finally, an industrial case study in the podcast recommendation domain demonstrates the practical value of the LLM-judge for model selection. Overall, our results show that the LLM-judge is a viable and scalable approach for evaluating recommender systems.
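The abstract's core comparison is between system rankings induced by human relevance labels and by LLM-judge labels, summarized with Kendall's tau. The sketch below illustrates one plausible way such a comparison could be set up; it is not the authors' code, and all names (`evaluate_system`, `kendall_tau`, `runs`, `human_qrels`, `llm_qrels`) are hypothetical. It assumes graded relevance labels per (user, item) pair and scores each system with DCG@k under both label sets before comparing the resulting system orderings.

```python
"""Illustrative sketch (not the paper's implementation): comparing recommender
rankings under human vs. LLM-judge relevance labels with Kendall's tau."""

import math
from itertools import combinations


def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain for a ranked list of relevance grades."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))


def evaluate_system(run, qrels, k=10):
    """Mean DCG@k of one system's recommendations under a given label set.

    run:   {user_id: [item_id, ...]}  ranked recommendations per user
    qrels: {(user_id, item_id): relevance grade}  human or LLM-judge labels
    """
    scores = [dcg_at_k([qrels.get((user, item), 0) for item in items], k)
              for user, items in run.items()]
    return sum(scores) / len(scores)


def kendall_tau(scores_a, scores_b):
    """Kendall's tau (tau-a, ties ignored) between two system score dicts."""
    systems = list(scores_a)
    concordant = discordant = 0
    for s1, s2 in combinations(systems, 2):
        product = (scores_a[s1] - scores_a[s2]) * (scores_b[s1] - scores_b[s2])
        if product > 0:
            concordant += 1
        elif product < 0:
            discordant += 1
    n_pairs = len(systems) * (len(systems) - 1) / 2
    return (concordant - discordant) / n_pairs


# Hypothetical usage: `runs` maps system names to recommendation lists,
# `human_qrels` / `llm_qrels` hold human and LLM-judge relevance labels.
# human_scores = {name: evaluate_system(run, human_qrels) for name, run in runs.items()}
# llm_scores = {name: evaluate_system(run, llm_qrels) for name, run in runs.items()}
# print("ranking agreement (Kendall's tau):", kendall_tau(human_scores, llm_scores))
```

A tau near 1 (the paper reports 0.87) would indicate that model selection decisions made with LLM-judge labels largely match those made with human labels, which is the practical property the abstract highlights.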

Repos / Data Links

Page Count
11 pages

Category
Computer Science:
Information Retrieval