PrLM: Learning Explicit Reasoning for Personalized RAG via Contrastive Reward Optimization
By: Kepu Zhang, Teng Shi, Weijie Yu, and more
Potential Business Impact:
Teaches computers to understand what you like.
Personalized retrieval-augmented generation (RAG) aims to produce user-tailored responses by incorporating retrieved user profiles alongside the input query. Existing methods primarily focus on improving retrieval and rely on large language models (LLMs) to implicitly integrate the retrieved context with the query. However, such models are often sensitive to retrieval quality and may generate responses that are misaligned with user preferences. To address this limitation, we propose PrLM, a reinforcement learning framework that trains LLMs to explicitly reason over retrieved user profiles. Guided by a contrastively trained personalization reward model, PrLM effectively learns from user responses without requiring annotated reasoning paths. Experiments on three personalized text generation datasets show that PrLM outperforms existing methods and remains robust across varying numbers of retrieved profiles and different retrievers.
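The abstract's core technical idea is a contrastively trained personalization reward model that scores candidate responses against a user's actual response, supplying the training signal for reinforcement learning without annotated reasoning paths. Below is a minimal sketch of what such a pairwise contrastive objective could look like, assuming a Bradley-Terry-style reward-margin loss; the class name `PersonalizationRewardModel`, the toy embedding inputs, and the scoring head are illustrative assumptions, not the paper's actual architecture or API.

```python
# Minimal sketch of a contrastive reward-model objective (assumed form).
# The reward model learns to score the user's own response higher than a
# mismatched (negative) response, given the query plus retrieved profiles.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PersonalizationRewardModel(nn.Module):
    """Scores how well a response fits a profile-conditioned query (sketch)."""

    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        # In practice these embeddings would come from an LLM encoder;
        # a linear head maps the (context, response) pair to a scalar reward.
        self.score_head = nn.Linear(hidden_dim * 2, 1)

    def forward(self, context_emb: torch.Tensor, response_emb: torch.Tensor) -> torch.Tensor:
        # context_emb:  (batch, hidden) encoding of query + retrieved profiles
        # response_emb: (batch, hidden) encoding of a candidate response
        return self.score_head(torch.cat([context_emb, response_emb], dim=-1)).squeeze(-1)


def contrastive_reward_loss(model, context_emb, pos_emb, neg_emb):
    """Pairwise loss: prefer the user-aligned response over the negative one."""
    r_pos = model(context_emb, pos_emb)  # reward for the aligned response
    r_neg = model(context_emb, neg_emb)  # reward for the mismatched response
    # Bradley-Terry objective: maximize log-sigmoid of the reward margin.
    return -F.logsigmoid(r_pos - r_neg).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = PersonalizationRewardModel(hidden_dim=768)
    # Toy embeddings standing in for encoded (query + profiles) and responses.
    ctx, pos, neg = torch.randn(4, 768), torch.randn(4, 768), torch.randn(4, 768)
    loss = contrastive_reward_loss(model, ctx, pos, neg)
    loss.backward()
    print(f"contrastive reward loss: {loss.item():.4f}")
```

Once trained this way, the reward model's scalar output could serve as the per-response reward for reinforcement-learning fine-tuning of the LLM's explicit reasoning over retrieved profiles, which is how the abstract describes PrLM learning from user responses without reasoning-path annotations.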
Similar Papers
Optimizing Retrieval for RAG via Reinforced Contrastive Learning
Computation and Language
AI learns to find better information for itself.
WebRec: Enhancing LLM-based Recommendations with Attention-guided RAG from Web
Information Retrieval
Helps online shopping find better things for you.
RALLRec+: Retrieval Augmented Large Language Model Recommendation with Reasoning
Information Retrieval
Finds better things you'll like.