OneRec-Think: In-Text Reasoning for Generative Recommendation
By: Zhanyu Liu, Shiyao Wang, Xingmei Wang, and more
Potential Business Impact:
Helps apps understand you better to keep you engaged.
The powerful generative capacity of Large Language Models (LLMs) has instigated a paradigm shift in recommendation. However, existing generative models (e.g., OneRec) operate as implicit predictors, critically lacking the capacity for explicit and controllable reasoning, a key advantage of LLMs. To bridge this gap, we propose OneRec-Think, a unified framework that seamlessly integrates dialogue, reasoning, and personalized recommendation. OneRec-Think incorporates: (1) Itemic Alignment: cross-modal Item-Textual Alignment for semantic grounding; (2) Reasoning Activation: Reasoning Scaffolding to activate LLM reasoning within the recommendation context; and (3) Reasoning Enhancement, where we design a recommendation-specific reward function that accounts for the multi-validity nature of user preferences. Experiments across public benchmarks show state-of-the-art performance. Moreover, our proposed "Think-Ahead" architecture enables effective industrial deployment on Kuaishou, achieving a 0.159% gain in APP Stay Time and validating the practical efficacy of the model's explicit reasoning capability.
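The abstract does not spell out the multi-validity reward, but the idea it names, that several different items can all be correct continuations of a user's preferences, can be sketched as follows. This is an illustrative assumption, not the paper's actual formulation: the function name and signature are hypothetical.

```python
def multi_validity_reward(generated_item: str, valid_items: set[str]) -> float:
    """Hypothetical sketch: reward a generated recommendation if it matches
    ANY item the user genuinely engaged with, rather than only the single
    logged next item. This reflects that user preferences admit multiple
    valid targets ("multi-validity"), so several outputs can earn reward."""
    return 1.0 if generated_item in valid_items else 0.0
```

Under this reading, a generation is not penalized for picking a different but equally valid item, which avoids punishing the model for the arbitrariness of which engaged item happened to be logged next.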
Similar Papers
Reason-to-Recommend: Using Interaction-of-Thought Reasoning to Enhance LLM Recommendation
Information Retrieval
Helps computers guess what you'll like better.
Think before Recommendation: Autonomous Reasoning-enhanced Recommender
Information Retrieval
Teaches computers to guess what you'll like.
MindRec: Mind-inspired Coarse-to-fine Decoding for Generative Recommendation
Information Retrieval
Suggests better things you might like.