Rethinking LLM-Based Recommendations: A Personalized Query-Driven Parallel Integration
By: Donghee Han, Hwanjun Song, Mun Yong Yi
Potential Business Impact:
Finds better movies and songs for you.
Recent studies have explored integrating large language models (LLMs) into recommendation systems, but they face several challenges, including training-induced bias and bottlenecks caused by serialized architectures. To address these issues, we propose Query-to-Recommendation, a parallel recommendation framework that decouples LLMs from candidate pre-selection and instead enables direct retrieval over the entire item pool. Our framework connects LLMs and recommendation models in parallel, allowing each component to exploit its strengths without interfering with the other. Within this framework, LLMs generate feature-enriched item descriptions and personalized user queries, capturing diverse preferences and enabling rich semantic matching in a zero-shot manner. To combine the complementary strengths of LLM-derived signals and collaborative signals, we introduce an adaptive reranking strategy. Extensive experiments demonstrate performance improvements of up to 57%, along with gains in the novelty and diversity of recommendations.
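
Below is a minimal sketch of the parallel, query-driven flow the abstract describes: an LLM writes a personalized query from the user's history, the query is matched against LLM-enriched item descriptions over the full item pool, and the semantic scores are reranked together with collaborative-filtering scores. It assumes a generic text encoder and an externally supplied collaborative-filtering scorer; the names embed, generate_user_query, cf_scores, and alpha are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def embed(texts):
        """Placeholder text encoder (assumed): returns unit-norm vectors."""
        rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
        vecs = rng.normal(size=(len(texts), 384))
        return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

    def generate_user_query(user_history):
        """Stand-in for the LLM step that writes a personalized query
        from the user's interaction history (zero-shot, no training)."""
        return "A user who enjoyed " + ", ".join(user_history) + " is looking for similar items."

    def recommend(user_history, item_descriptions, cf_scores, alpha=0.5, k=10):
        """Parallel integration: semantic retrieval over the entire item pool,
        then reranking that blends in collaborative-filtering scores."""
        query_vec = embed([generate_user_query(user_history)])[0]
        item_vecs = embed(item_descriptions)       # LLM-enriched item descriptions
        semantic = item_vecs @ query_vec           # cosine similarity (unit vectors)
        combined = alpha * semantic + (1 - alpha) * np.asarray(cf_scores)
        return np.argsort(-combined)[:k]           # indices of top-k items

    # Toy usage: three items, CF scores assumed to come from a pre-trained recommender
    history = ["The Matrix", "Blade Runner"]
    items = ["A dystopian cyberpunk film ...", "A romantic comedy set in ...", "A neo-noir sci-fi thriller ..."]
    print(recommend(history, items, cf_scores=[0.8, 0.1, 0.6], k=2))

The fixed alpha here is a simplification: the paper's adaptive reranking adjusts how the LLM-based semantic match and the collaborative signal are weighted, rather than using a single constant.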
Similar Papers
End-to-End Personalization: Unifying Recommender Systems with Large Language Models
Information Retrieval
Suggests movies you'll love, explains why.
LLM as Explainable Re-Ranker for Recommendation System
Information Retrieval
Helps online stores show you better, clearer choices.
Preserving Privacy and Utility in LLM-Based Product Recommendations
Information Retrieval
Keeps your private info safe while suggesting cool stuff.