User Feedback Alignment for LLM-powered Exploration in Large-scale Recommendation Systems
By: Jianling Wang, Yifan Liu, Yinghao Sun, and more
Potential Business Impact:
Finds new videos you'll like, not just favorites.
Exploration, the act of broadening user experiences beyond their established preferences, is challenging in large-scale recommendation systems due to feedback loops and limited signals on user exploration patterns. Large Language Models (LLMs) offer a potential solution by leveraging their world knowledge to recommend novel content outside these loops. A key challenge is aligning LLMs with user preferences while preserving their world knowledge and reasoning ability. To enhance planning for new user interests with LLMs, this paper introduces a novel approach that combines hierarchical planning with LLM inference-time scaling, aiming to improve recommendation relevance without compromising novelty. We decouple novelty and user alignment, training a separate LLM for each objective. We then scale up inference on the novelty-focused LLM and select the best of its n predictions using the user-aligned LLM. Live experiments demonstrate efficacy, showing significant gains in both user satisfaction (measured by watch activity and active user counts) and exploration diversity.
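The abstract describes a decoupled generate-then-select loop: sample n candidate interests from the novelty-focused LLM, then let the user-aligned LLM pick the best one. Below is a minimal Python sketch of that best-of-n pattern under stated assumptions; the stub classes, method names (generate, score), and the toy relevance signal are hypothetical stand-ins for illustration, not the paper's actual system.

```python
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    interest: str          # novel interest proposed for the user
    alignment_score: float # relevance score from the user-aligned LLM

class StubNoveltyLLM:
    """Hypothetical stand-in for the novelty-focused LLM."""
    TOPICS = ["woodworking", "jazz history", "urban gardening", "chess openings"]

    def generate(self, user_profile):
        # A real model would use high-temperature sampling to produce
        # diverse candidates; here we just draw a random topic.
        return random.choice(self.TOPICS)

class StubAlignmentLLM:
    """Hypothetical stand-in for the user-aligned LLM used as a selector."""

    def score(self, user_profile, interest):
        # Toy relevance signal: 1.0 if the interest is adjacent to the
        # user's known preferences, else 0.0. A real selector would be
        # a preference-tuned LLM scoring each candidate.
        return 1.0 if interest in user_profile["adjacent_interests"] else 0.0

def best_of_n_exploration(novelty_llm, alignment_llm, user_profile, n=16):
    """Sample n candidates from the novelty LLM (inference-time scaling),
    then return the one the user-aligned LLM scores highest (best-of-n)."""
    candidates = [novelty_llm.generate(user_profile) for _ in range(n)]
    scored = [Candidate(c, alignment_llm.score(user_profile, c))
              for c in candidates]
    return max(scored, key=lambda c: c.alignment_score)

if __name__ == "__main__":
    user = {"adjacent_interests": {"jazz history", "chess openings"}}
    print(best_of_n_exploration(StubNoveltyLLM(), StubAlignmentLLM(), user, n=8))
```

The key design choice the sketch mirrors is the separation of objectives: novelty comes entirely from the generator, and relevance entirely from the selector, so increasing n raises diversity without the generator having to trade it off against alignment.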
Similar Papers
HELM: Human-Preferred Exploration with Language Models
Robotics
Robots learn to explore where you want them to.
Serendipitous Recommendation with Multimodal LLM
Information Retrieval
Finds you cool new videos you'll love.
Bridging Collaborative Filtering and Large Language Models with Dynamic Alignment, Multimodal Fusion and Evidence-grounded Explanations
Information Retrieval
Shows you things you'll like, even if they change.