Align$^3$GR: Unified Multi-Level Alignment for LLM-based Generative Recommendation
By: Wencai Ye, Mingjie Sun, Shuhang Chen, and more
Potential Business Impact:
Recommends better things by understanding you more.
Large Language Models (LLMs) demonstrate significant advantages in leveraging structured world knowledge and multi-step reasoning capabilities. However, fundamental challenges arise when transforming LLMs into real-world recommender systems due to semantic and behavioral misalignment. To bridge this gap, we propose Align$^3$GR, a novel framework that unifies token-level, behavior modeling-level, and preference-level alignment. Our approach introduces: (1) dual tokenization that fuses user-item semantic and collaborative signals; (2) enhanced behavior modeling with bidirectional semantic alignment; and (3) a progressive DPO strategy combining self-play (SP-DPO) and real-world feedback (RF-DPO) for dynamic preference adaptation. Experiments show that Align$^3$GR outperforms the SOTA baseline by +17.8% in Recall@10 and +20.2% in NDCG@10 on the public dataset, with significant gains in online A/B tests and full-scale deployment on a large-scale industrial recommendation platform.
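To make the preference-level alignment concrete, the sketch below shows the standard DPO pairwise objective over tokenized item sequences, which the progressive SP-DPO/RF-DPO stages described in the abstract build on (self-play rollouts or real-world feedback would supply the preferred/dispreferred pairs). This is a minimal illustration under those assumptions; the function name and arguments are illustrative, not the paper's implementation.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective over (preferred, dispreferred) item sequences.

    Each argument is the summed token log-probability of a generated item
    sequence under the trainable policy or the frozen reference model.
    In a generative recommender, the "chosen" sequence could be an item the
    user engaged with (RF-DPO) or a higher-ranked self-play candidate
    (SP-DPO), and the "rejected" one a dispreferred counterpart.
    """
    policy_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    # -log sigmoid(beta * (policy margin - reference margin)), averaged over the batch
    return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()
```

In practice the same loss form is reused across stages; only the source of the preference pairs changes as training progresses from self-play to real-world feedback.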
Similar Papers
Generative Reasoning Recommendation via LLMs
Information Retrieval
Helps computers suggest things you'll like.
Rank-GRPO: Training LLM-based Conversational Recommender Systems with Reinforcement Learning
Information Retrieval
Helps chatbots recommend real products better.