Do LLM-judges Align with Human Relevance in Cranfield-style Recommender Evaluation?
By: Gustavo Penha, Aleksandr V. Petrov, Claudia Hauff, and more
Potential Business Impact:
Lets computers judge movie recommendations fairly.
Evaluating recommender systems remains a long-standing challenge: offline methods based on historical user interactions and train-test splits often yield unstable and inconsistent results due to exposure bias, popularity bias, sampled evaluation, and missing-not-at-random patterns. In contrast, textual document retrieval benefits from robust, standardized evaluation via Cranfield-style test collections, which combine pooled relevance judgments with controlled experimental setups. While recent work shows that adapting this methodology to recommender systems is feasible, constructing such collections remains costly because of the need for manual relevance judgments, which limits scalability. This paper investigates whether Large Language Models (LLMs) can serve as reliable automatic judges to address this scalability challenge. Using the ML-32M-ext Cranfield-style movie recommendation collection, we first examine the limitations of existing evaluation methodologies. We then study both the alignment between LLM-judge and human-provided relevance labels and the agreement between the recommender-system rankings each set of labels induces. We find that incorporating richer item metadata and longer user histories improves alignment, and that the LLM-judge yields high agreement with human-based system rankings (Kendall's tau = 0.87). Finally, an industrial case study in the podcast recommendation domain demonstrates the practical value of the LLM-judge for model selection. Overall, our results show that the LLM-judge is a viable and scalable approach for evaluating recommender systems.
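The headline result is ranking agreement (Kendall's tau = 0.87) between the system rankings produced under human relevance labels and under LLM-judge labels. Below is a minimal sketch of that comparison, not the authors' code: it scores each recommender with a simple precision@k under two sets of relevance labels and then correlates the resulting per-system scores. The data structures (runs, human_qrels, llm_qrels) and the choice of precision@k are illustrative assumptions; the paper's actual metrics and collection format may differ.

from scipy.stats import kendalltau

def precision_at_k(ranked_items, qrels, k=10):
    # Fraction of the top-k recommended items that are judged relevant.
    return sum(qrels.get(item, 0) > 0 for item in ranked_items[:k]) / k

def system_score(run, qrels, k=10):
    # Mean precision@k of one recommender over all users in the collection.
    return sum(precision_at_k(items, qrels[user], k) for user, items in run.items()) / len(run)

def ranking_agreement(runs, human_qrels, llm_qrels, k=10):
    # Kendall's tau between the system rankings induced by human vs. LLM-judge labels.
    systems = sorted(runs)
    human_scores = [system_score(runs[s], human_qrels, k) for s in systems]
    llm_scores = [system_score(runs[s], llm_qrels, k) for s in systems]
    tau, _ = kendalltau(human_scores, llm_scores)
    return tau

# Toy, fully invented data: two users, three recommenders, binary relevance labels.
human_qrels = {"u1": {"m1": 1, "m2": 0, "m3": 1}, "u2": {"m4": 1, "m5": 1}}
llm_qrels   = {"u1": {"m1": 1, "m2": 0, "m3": 0}, "u2": {"m4": 1, "m5": 1}}
runs = {
    "recA": {"u1": ["m1", "m3", "m2"], "u2": ["m4", "m5"]},
    "recB": {"u1": ["m2", "m1", "m3"], "u2": ["m5", "m4"]},
    "recC": {"u1": ["m3", "m2", "m1"], "u2": ["m4", "m5"]},
}
print(ranking_agreement(runs, human_qrels, llm_qrels, k=2))  # tau near 1 means both label sets rank the systems alike

In the paper's setting, the same kind of comparison would be run over the full ML-32M-ext collection with the metrics the authors report; a tau of 0.87 indicates that the LLM-judge reproduces the human-based ordering of recommender systems almost exactly.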
Similar Papers
LLM-as-a-Judge: Rapid Evaluation of Legal Document Recommendation for Retrieval-Augmented Generation
Computation and Language
Lets computers judge legal AI work fairly.
LLM as Explainable Re-Ranker for Recommendation System
Information Retrieval
Helps online stores show you better, clearer choices.
An Empirical Study of LLM-as-a-Judge: How Design Choices Impact Evaluation Reliability
Computation and Language
Helps computers judge other computers' answers.