Cold-Start Recommendation with Knowledge-Guided Retrieval-Augmented Generation
By: Wooseong Yang, Weizhi Zhang, Yuqing Liu and more
Potential Business Impact:
Helps movie apps suggest new shows you'll like.
Cold-start items remain a persistent challenge in recommender systems due to their lack of historical user interactions, which collaborative models rely on. While recent zero-shot methods leverage large language models (LLMs) to address this, they often struggle with sparse metadata and hallucinated or incomplete knowledge. We propose ColdRAG, a retrieval-augmented generation approach that builds a domain-specific knowledge graph dynamically to enhance LLM-based recommendation in cold-start scenarios, without requiring task-specific fine-tuning. ColdRAG begins by converting structured item attributes into rich natural-language profiles, from which it extracts entities and relationships to construct a unified knowledge graph capturing item semantics. Given a user's interaction history, it scores edges in the graph using an LLM, retrieves candidate items with supporting evidence, and prompts the LLM to rank them. By enabling multi-hop reasoning over this graph, ColdRAG grounds recommendations in verifiable evidence, reducing hallucinations and strengthening semantic connections. Experiments on three public benchmarks demonstrate that ColdRAG surpasses existing zero-shot baselines in both Recall and NDCG. This framework offers a practical solution to cold-start recommendation by combining knowledge-graph reasoning with retrieval-augmented LLM generation.
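The retrieval step described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `llm` stands in for any large-language-model call, and the triple schema, scoring prompt, relevance threshold, and hop count are all illustrative assumptions.

```python
# Rough sketch of knowledge-graph retrieval with LLM edge scoring,
# in the spirit of ColdRAG. `llm` is a hypothetical callable that
# takes a prompt string and returns the model's text response.
from collections import defaultdict

def build_graph(triples):
    """Store (head, relation, tail) triples as an adjacency map."""
    graph = defaultdict(list)
    for head, relation, tail in triples:
        graph[head].append((relation, tail))
    return graph

def score_edge(llm, user_history, head, relation, tail):
    """Ask the LLM how relevant a graph edge is to the user's history (0-1)."""
    prompt = (
        f"User interacted with: {', '.join(user_history)}.\n"
        f"Edge: {head} --{relation}--> {tail}.\n"
        "Rate the relevance of this edge from 0 to 1. Answer with a number only."
    )
    try:
        return float(llm(prompt))
    except ValueError:
        return 0.0

def retrieve_candidates(llm, graph, user_history, hops=2, top_k=10):
    """Multi-hop walk from the user's items, keeping edges the LLM scores highly
    and recording the traversed triples as supporting evidence."""
    frontier = list(user_history)
    scores, evidence = defaultdict(float), defaultdict(list)
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            for relation, tail in graph.get(node, []):
                s = score_edge(llm, user_history, node, relation, tail)
                if s > 0.5 and tail not in user_history:
                    scores[tail] += s
                    evidence[tail].append((node, relation, tail))
                    next_frontier.append(tail)
        frontier = next_frontier
    ranked = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [(item, evidence[item]) for item in ranked]
```

In a full pipeline, the returned candidates and their evidence triples would then be placed into a ranking prompt so the LLM's final recommendations are grounded in retrieved facts rather than free-form generation.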
Similar Papers
ItemRAG: Item-Based Retrieval-Augmented Generation for LLM-Based Recommendation
Information Retrieval
Helps online stores suggest better items to buy.
KERAG_R: Knowledge-Enhanced Retrieval-Augmented Generation for Recommendation
Information Retrieval
Helps computers suggest better movies and products.
Knowledge Graph Retrieval-Augmented Generation for LLM-based Recommendation
Information Retrieval
Helps online suggestions use better, newer facts.