Revisiting scalable sequential recommendation with Multi-Embedding Approach and Mixture-of-Experts
By: Qiushi Pan, Hao Wang, Guoyuan An, and more
Potential Business Impact:
Shows you more things you might like.
In recommendation systems, how to effectively scale up recommendation models is an essential research topic. While significant progress has been made in developing advanced and scalable architectures for sequential recommendation (SR) models, challenges remain due to items' multi-faceted characteristics and the dynamic relevance of items within a user's context. To address these issues, we propose Fuxi-MME, a framework that integrates a multi-embedding strategy with a Mixture-of-Experts (MoE) architecture. Specifically, to efficiently capture diverse item characteristics in a decoupled manner, we decompose the conventional single embedding matrix into several lower-dimensional embedding matrices. Additionally, by substituting relevant parameters in the Fuxi Block with an MoE layer, our model achieves adaptive and specialized transformation of the enriched representations. Empirical results on public datasets show that our proposed framework outperforms several competitive baselines.
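The two ideas in the abstract — splitting one embedding table into several lower-dimensional facet tables, and routing the resulting representation through a gated Mixture-of-Experts layer — can be sketched minimally as follows. This is an illustrative numpy sketch, not the paper's implementation: the sizes (`n_facets`, `facet_dim`, `n_experts`), the concatenation of facet embeddings, and the dense softmax gate are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen for illustration only.
n_items, n_facets, facet_dim = 100, 4, 8
d = n_facets * facet_dim  # total representation width

# Multi-embedding: one low-dimensional table per item facet,
# instead of a single n_items x d embedding matrix.
facet_tables = [rng.normal(size=(n_items, facet_dim)) for _ in range(n_facets)]

def embed(item_id):
    # Concatenate the per-facet embeddings into one enriched representation.
    return np.concatenate([t[item_id] for t in facet_tables])

# Mixture-of-Experts layer: each expert is a linear map over the
# representation; a softmax gate mixes expert outputs per input.
n_experts = 3
experts = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))

def moe(x):
    logits = x @ gate_w
    g = np.exp(logits - logits.max())
    g = g / g.sum()  # gate weights sum to 1
    return sum(g[k] * (x @ experts[k]) for k in range(n_experts))

h = moe(embed(7))  # adaptive transform of one item's representation
```

In the paper, the MoE layer replaces parameters inside the Fuxi Block rather than acting as a standalone transform, but the gating-and-mixing mechanism is the same in spirit.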
Similar Papers
HyMoERec: Hybrid Mixture-of-Experts for Sequential Recommendation
Information Retrieval
Suggests better movies and products you'll like.
Empowering Large Language Model for Sequential Recommendation via Multimodal Embeddings and Semantic IDs
Information Retrieval
Helps online stores show you better stuff.
Breaking the MoE LLM Trilemma: Dynamic Expert Clustering with Structured Compression
Computation and Language
Makes AI smarter, faster, and use less memory.