Do LLM-judges Align with Human Relevance in Cranfield-style Recommender Evaluation?
By: Gustavo Penha, Aleksandr V. Petrov, Claudia Hauff, and more
Potential Business Impact:
Lets computers judge movie recommendations fairly.
Evaluating recommender systems remains a long-standing challenge: offline methods based on historical user interactions and train-test splits often yield unstable and inconsistent results due to exposure bias, popularity bias, sampled evaluation, and missing-not-at-random patterns. Textual document retrieval, in contrast, benefits from robust, standardized evaluation via Cranfield-style test collections, which combine pooled relevance judgments with controlled setups. While recent work shows that adapting this methodology to recommender systems is feasible, constructing such collections remains costly due to the need for manual relevance judgments, which limits scalability. This paper investigates whether Large Language Models (LLMs) can serve as reliable automatic judges to address this scalability challenge. Using the ML-32M-ext Cranfield-style movie recommendation collection, we first examine the limitations of existing evaluation methodologies. We then measure both the alignment between LLM-judge and human-provided relevance labels and the agreement between the recommender system rankings that each set of labels induces. We find that incorporating richer item metadata and longer user histories improves alignment, and that the LLM-judge produces system rankings that agree strongly with human-based ones (Kendall's tau = 0.87). Finally, an industrial case study in the podcast recommendation domain demonstrates the practical value of the LLM-judge for model selection. Overall, our results show that LLM-judges are a viable and scalable approach for evaluating recommender systems.
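To make the ranking-agreement measure concrete, below is a minimal Python sketch, not the authors' code: it scores each recommender once with human relevance labels and once with LLM-judge labels, then compares the two induced system rankings with Kendall's tau. The system names, label values, and DCG@k cutoff are illustrative assumptions, not data from the paper.

```python
# Minimal sketch of system-ranking agreement between human and LLM-judge labels.
# All systems and labels below are hypothetical; only the procedure matters.
import math

from scipy.stats import kendalltau


def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k recommended items."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))


# Hypothetical binary relevance labels for each system's top-5 recommendations.
human_labels = {
    "sys_a": [1, 1, 0, 1, 0],
    "sys_b": [1, 0, 0, 0, 1],
    "sys_c": [0, 1, 1, 1, 1],
}
llm_labels = {
    "sys_a": [1, 1, 0, 0, 0],
    "sys_b": [1, 0, 1, 0, 1],
    "sys_c": [1, 1, 1, 1, 0],
}

systems = sorted(human_labels)
human_scores = [dcg_at_k(human_labels[s]) for s in systems]
llm_scores = [dcg_at_k(llm_labels[s]) for s in systems]

# Tau near 1 means the LLM judge orders the recommender systems
# almost exactly as the human judgments do.
tau, p_value = kendalltau(human_scores, llm_scores)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```

The point of a system-level tau, as opposed to per-label agreement, is that model selection only needs the relative ordering of systems to be preserved: an LLM judge can disagree on individual items yet still support the same selection decision, which is what the paper's reported tau of 0.87 speaks to.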
Similar Papers
Fine Grained Evaluation of LLMs-as-Judges
Information Retrieval
Computers can now find important text parts.
Judging the Judges: A Collection of LLM-Generated Relevance Judgements
Information Retrieval
Computers can now judge search results faster than people.
Criteria-Based LLM Relevance Judgments
Information Retrieval
Helps computers judge search results better.