Q-BERT4Rec: Quantized Semantic-ID Representation Learning for Multimodal Recommendation
By: Haofeng Huang, Ling Gai
Potential Business Impact:
Helps online stores guess what you'll buy next.
Sequential recommendation plays a critical role in modern online platforms such as e-commerce, advertising, and content streaming, where accurately predicting users' next interactions is essential for personalization. Recent Transformer-based methods such as BERT4Rec have shown strong modeling capability, yet they still rely on discrete item IDs that lack semantic meaning and ignore rich multimodal information (e.g., text and images), leading to weak generalization and limited interpretability. To address these challenges, we propose Q-BERT4Rec, a multimodal sequential recommendation framework that unifies semantic representation and quantized modeling. Specifically, Q-BERT4Rec consists of three stages: (1) cross-modal semantic injection, which enriches randomly initialized ID embeddings through a dynamic transformer that fuses textual, visual, and structural features; (2) semantic quantization, which discretizes the fused representations into meaningful tokens via residual vector quantization; and (3) multi-mask pretraining and fine-tuning, which leverage diverse masking strategies (span, tail, and multi-region) to improve sequential understanding. We validate our model on public Amazon benchmarks and show that Q-BERT4Rec significantly outperforms strong existing baselines, confirming the effectiveness of semantic tokenization for multimodal sequential recommendation. Our source code will be made publicly available on GitHub upon publication.
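To make stage (2) concrete, below is a minimal sketch of residual vector quantization: each fused multimodal item embedding is matched against a codebook, the residual is quantized against the next codebook, and the resulting index tuple serves as the item's semantic ID. The codebook size, number of levels, embedding dimension, and nearest-neighbour assignment here are illustrative assumptions, not the authors' exact configuration.

```python
import torch

class ResidualVectorQuantizer(torch.nn.Module):
    """Illustrative RVQ module; hyperparameters are assumptions, not the paper's values."""
    def __init__(self, dim: int = 256, num_levels: int = 3, codebook_size: int = 512):
        super().__init__()
        # One learnable codebook per quantization level.
        self.codebooks = torch.nn.Parameter(torch.randn(num_levels, codebook_size, dim) * 0.02)

    def forward(self, x: torch.Tensor):
        """x: (batch, dim) fused multimodal item embeddings.
        Returns (codes, quantized): the semantic-ID tokens and their reconstruction."""
        residual = x
        quantized = torch.zeros_like(x)
        codes = []
        for level in range(self.codebooks.shape[0]):
            codebook = self.codebooks[level]            # (codebook_size, dim)
            dists = torch.cdist(residual, codebook)     # distances to every codeword
            idx = dists.argmin(dim=-1)                  # nearest codeword per item
            selected = codebook[idx]                    # (batch, dim)
            quantized = quantized + selected
            residual = residual - selected              # quantize what remains at the next level
            codes.append(idx)
        return torch.stack(codes, dim=-1), quantized    # codes: (batch, num_levels)

# Usage sketch: each item maps to a short tuple of discrete tokens, e.g. (17, 402, 35),
# which replaces its opaque item ID in the downstream BERT4Rec-style sequence model.
rvq = ResidualVectorQuantizer()
item_embeddings = torch.randn(8, 256)
semantic_ids, recon = rvq(item_embeddings)
```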
Similar Papers
Multi-Aspect Cross-modal Quantization for Generative Recommendation
Information Retrieval
Helps computers guess what you'll like next.
BBQRec: Behavior-Bind Quantization for Multi-Modal Sequential Recommendation
Information Retrieval
Recommends better by understanding item pictures and words.