ItemRAG: Item-Based Retrieval-Augmented Generation for LLM-Based Recommendation
By: Sunwoo Kim, Geon Lee, Kyungho Kim, and more
Potential Business Impact:
Helps online stores suggest better items to buy.
Recently, large language models (LLMs) have been widely used as recommender systems, owing to their strong reasoning capability and their effectiveness in handling cold-start items. To better adapt LLMs for recommendation, retrieval-augmented generation (RAG) has been incorporated. Most existing RAG methods are user-based, retrieving purchase patterns of users similar to the target user and providing them to the LLM. In this work, we propose ItemRAG, an item-based RAG method for LLM-based recommendation that retrieves relevant items (rather than users) from item-item co-purchase histories. ItemRAG helps LLMs capture co-purchase patterns among items, which are beneficial for recommendations. In particular, our retrieval strategy incorporates semantically similar items to better handle cold-start items and uses co-purchase frequencies to improve the relevance of the retrieved items. Through extensive experiments, we demonstrate that ItemRAG consistently (1) improves the zero-shot LLM-based recommender by up to 43% in Hit-Ratio-1 and (2) outperforms user-based RAG baselines under both standard and cold-start item recommendation settings.
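The abstract describes two retrieval signals: co-purchase frequency for items with purchase history, and semantic similarity as a fallback for cold-start items. Below is a minimal, hypothetical sketch of that idea, not the authors' implementation: the function name retrieve_items, the co_purchase and embeddings dictionaries, and the exact fallback logic are illustrative assumptions.

```python
import numpy as np
from collections import Counter


def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))


def retrieve_items(target_item, co_purchase, embeddings, top_k=5):
    """Pick up to top_k items to include in the LLM prompt for target_item.

    co_purchase: dict mapping item id -> Counter of items co-purchased with it.
    embeddings:  dict mapping item id -> np.ndarray text embedding of the item.
    """
    counts = co_purchase.get(target_item)
    if counts:
        # Warm item: rank candidates by how often they were co-purchased
        # with the target item.
        return [item for item, _ in counts.most_common(top_k)]

    # Cold-start item: no co-purchase history, so fall back to the items
    # whose descriptions are semantically closest to the target.
    target_vec = embeddings[target_item]
    scored = sorted(
        ((item, cosine(target_vec, vec))
         for item, vec in embeddings.items() if item != target_item),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return [item for item, _ in scored[:top_k]]
```

The retrieved items (and, for cold-start targets, the histories of their semantically similar neighbors) would then be serialized into the prompt given to the LLM recommender.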
Similar Papers
WebRec: Enhancing LLM-based Recommendations with Attention-guided RAG from Web
Information Retrieval
Helps online shoppers find better things for you.
When Retrieval Succeeds and Fails: Rethinking Retrieval-Augmented Generation for LLMs
Computation and Language
Helps smart computers learn new things faster.
RAGDoll: Efficient Offloading-based Online RAG System on a Single GPU
Distributed, Parallel, and Cluster Computing
Makes smart computers answer faster on regular devices.