ReaSeq: Unleashing World Knowledge via Reasoning for Sequential Modeling
By: Chuan Wang, Gaoming Yang, Han Wu, and more
Potential Business Impact:
Helps online stores show you better things.
Industrial recommender systems face two fundamental limitations under the log-driven paradigm: (1) knowledge poverty in ID-based item representations, which causes brittle interest modeling under data sparsity, and (2) systemic blindness to beyond-log user interests, which constrains model performance within platform boundaries. These limitations stem from an over-reliance on shallow interaction statistics and closed-loop feedback while neglecting the rich world knowledge about product semantics and cross-domain behavioral patterns that Large Language Models have learned from vast corpora. To address these challenges, we introduce ReaSeq, a reasoning-enhanced framework that leverages the world knowledge in Large Language Models to tackle both limitations through explicit and implicit reasoning. Specifically, ReaSeq employs explicit Chain-of-Thought reasoning via multi-agent collaboration to distill structured product knowledge into semantically enriched item representations, and latent reasoning via Diffusion Large Language Models to infer plausible beyond-log behaviors. Deployed on Taobao's ranking system serving hundreds of millions of users, ReaSeq achieves substantial gains: >6.0% in IPV and CTR, >2.9% in Orders, and >2.5% in GMV, validating the effectiveness of world-knowledge-enhanced reasoning over purely log-driven approaches.
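To make the two reasoning pathways in the abstract concrete, here is a minimal, hypothetical sketch of how they could fit around a sequential ranker. It is not the ReaSeq implementation: the multi-agent Chain-of-Thought distillation, the Diffusion-LLM latent reasoning, and the production ranker are all replaced by simple placeholders, and every function and variable name below is assumed for illustration only.

```python
# Hypothetical sketch of the two pathways described in the abstract:
# (1) explicit CoT reasoning that enriches item representations with world
#     knowledge, and (2) latent reasoning that adds plausible beyond-log
#     behaviors to the user's logged sequence before ranking.
from dataclasses import dataclass

import numpy as np

EMB_DIM = 16


@dataclass
class Item:
    item_id: str
    raw_metadata: str  # e.g., title/category text logged by the platform


def cot_enrich(item: Item) -> str:
    """Placeholder for explicit Chain-of-Thought reasoning: in the paper this
    is done by collaborating LLM agents that distill structured product
    knowledge (attributes, use cases, audience) from raw metadata."""
    return f"{item.raw_metadata} | inferred attributes: <filled in by LLM agents>"


def embed_text(text: str) -> np.ndarray:
    """Placeholder text encoder mapping enriched descriptions to vectors.
    A real system would use a trained semantic encoder, not hashing."""
    seed = abs(hash(text)) % (2**32)  # deterministic within one run only
    return np.random.default_rng(seed).standard_normal(EMB_DIM)


def latent_reasoning_augment(history: list[Item]) -> list[Item]:
    """Placeholder for latent reasoning with a Diffusion LLM: infer plausible
    beyond-log behaviors and append them as pseudo-interactions."""
    inferred = Item("pseudo_1", "plausible beyond-log item inferred from history")
    return history + [inferred]


def score_candidates(history: list[Item], candidates: list[Item]) -> list[float]:
    """Toy sequential ranker: dot product between the mean of the enriched
    (and augmented) history embeddings and each candidate's embedding."""
    augmented = latent_reasoning_augment(history)
    hist_vecs = [embed_text(cot_enrich(it)) for it in augmented]
    user_vec = np.mean(hist_vecs, axis=0)
    return [float(user_vec @ embed_text(cot_enrich(c))) for c in candidates]


if __name__ == "__main__":
    history = [Item("a1", "wireless earbuds"), Item("a2", "phone case")]
    candidates = [Item("c1", "bluetooth speaker"), Item("c2", "garden hose")]
    print(score_candidates(history, candidates))
```

The point of the sketch is the data flow: enriched semantic representations replace bare item IDs, and inferred beyond-log interactions extend the logged sequence, so the downstream ranker sees more than the platform's own click log.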
Similar Papers
Enhancing Sequential Recommendation with World Knowledge from Large Language Models
Information Retrieval
Helps online suggestions guess what you'll like next.
Intent-Guided Reasoning for Sequential Recommendation
Information Retrieval
Helps computers guess what you want next.
ReasonRank: Empowering Passage Ranking with Strong Reasoning Ability
Information Retrieval
Helps computers rank information by thinking step-by-step.