LLM Reasoning for Cold-Start Item Recommendation
By: Shijun Li, Yu Wang, Jin Wang, and more
Potential Business Impact:
Helps Netflix recommend new movies you'll like.
Large Language Models (LLMs) have shown significant potential for improving recommendation systems through their inherent reasoning capabilities and extensive knowledge base. Yet, existing studies predominantly address warm-start scenarios with abundant user-item interaction data, leaving the more challenging cold-start scenarios, where sparse interactions hinder traditional collaborative filtering methods, underexplored. To address this limitation, we propose novel reasoning strategies designed for cold-start item recommendations within the Netflix domain. Our method utilizes the advanced reasoning capabilities of LLMs to effectively infer user preferences, particularly for newly introduced or rarely interacted items. We systematically evaluate supervised fine-tuning, reinforcement learning-based fine-tuning, and hybrid approaches that combine both methods to optimize recommendation performance. Extensive experiments on real-world data demonstrate significant improvements in both methodological efficacy and practical performance in cold-start recommendation contexts. Remarkably, our reasoning-based fine-tuned models outperform Netflix's production ranking model by up to 8% in certain cases.
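The abstract reports ranking improvements measured against held-out user-item interactions. As a minimal illustration of how such cold-start ranking quality is commonly scored (a hypothetical hit-rate@K metric on made-up data, not the authors' code or Netflix's metric):

```python
def hit_rate_at_k(rankings, held_out, k=10):
    """Fraction of users whose top-K ranked list contains at least
    one held-out (relevant) item. Used to compare ranking models."""
    if not rankings:
        return 0.0
    hits = 0
    for user, ranked_items in rankings.items():
        relevant = held_out.get(user, set())
        if any(item in relevant for item in ranked_items[:k]):
            hits += 1
    return hits / len(rankings)

# Hypothetical example: rankings over newly released (cold-start) titles.
rankings = {
    "u1": ["new_show_A", "old_hit_B", "new_show_C"],
    "u2": ["old_hit_B", "new_show_C", "new_show_A"],
    "u3": ["new_show_C", "old_hit_B", "new_show_A"],
}
held_out = {"u1": {"new_show_A"}, "u2": {"new_show_A"}, "u3": {"old_hit_D"}}

print(hit_rate_at_k(rankings, held_out, k=2))  # only u1 has a hit in its top 2
```

Comparing a metric like this for a fine-tuned LLM ranker versus a baseline ranker is what a claim such as "up to 8% improvement" typically refers to, though the paper's exact metric may differ.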
Similar Papers
Selecting User Histories to Generate LLM Users for Cold-Start Item Recommendation
Information Retrieval
Helps new products get recommended to the right people.
LLMInit: A Free Lunch from Large Language Models for Selective Initialization of Recommendation
Information Retrieval
Makes movie suggestions better, even for new users.
Revealing Potential Biases in LLM-Based Recommender Systems in the Cold Start Setting
Information Retrieval
Finds unfairness in computer suggestions.