CEMG: Collaborative-Enhanced Multimodal Generative Recommendation
By: Yuzhen Lin, Hongyi Chen, Xuanjing Chen, and more
Generative recommendation models often struggle with two key challenges: (1) the superficial integration of collaborative signals, and (2) the decoupled fusion of multimodal features. These limitations hinder the creation of a truly holistic item representation. To overcome this, we propose CEMG, a novel Collaborative-Enhanced Multimodal Generative Recommendation framework. Our approach features a Multimodal Fusion Layer that dynamically integrates visual and textual features under the guidance of collaborative signals. Subsequently, a Unified Modality Tokenization stage employs a Residual Quantization VAE (RQ-VAE) to convert this fused representation into discrete semantic codes. Finally, in the End-to-End Generative Recommendation stage, a large language model is fine-tuned to autoregressively generate these item codes. Extensive experiments demonstrate that CEMG significantly outperforms state-of-the-art baselines.
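To make the pipeline in the abstract concrete, the sketch below illustrates the two core ideas in miniature: a fusion of visual and textual features gated by a collaborative embedding, followed by greedy residual quantization of the fused vector into discrete semantic codes. This is not the authors' implementation; the module names (`CollaborativeGatedFusion`, `ResidualQuantizer`), the gating formula, the number of quantization levels, and the codebook size are all illustrative assumptions.

```python
# Minimal, hypothetical sketch of collaborative-guided fusion + residual quantization.
# Dimensions, gating design, and codebook settings are assumptions, not the paper's spec.

import torch
import torch.nn as nn


class CollaborativeGatedFusion(nn.Module):
    """Blend visual and textual features with a gate conditioned on a collaborative embedding."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(3 * dim, dim), nn.Sigmoid())
        self.proj = nn.Linear(dim, dim)

    def forward(self, visual, textual, collab):
        g = self.gate(torch.cat([visual, textual, collab], dim=-1))  # per-dimension weights in (0, 1)
        fused = g * visual + (1.0 - g) * textual                     # collaborative-guided blend
        return self.proj(fused)


class ResidualQuantizer(nn.Module):
    """Greedy residual quantization: each level encodes the residual left by the previous level."""

    def __init__(self, dim: int, levels: int = 3, codebook_size: int = 256):
        super().__init__()
        self.codebooks = nn.ParameterList(
            [nn.Parameter(torch.randn(codebook_size, dim)) for _ in range(levels)]
        )

    def forward(self, x):
        residual = x
        codes, quantized = [], torch.zeros_like(x)
        for codebook in self.codebooks:
            dists = torch.cdist(residual, codebook)   # (batch, codebook_size) distances
            idx = dists.argmin(dim=-1)                # nearest code per item
            selected = codebook[idx]
            quantized = quantized + selected
            residual = residual - selected
            codes.append(idx)
        return torch.stack(codes, dim=-1), quantized  # discrete code tuple + reconstruction


if __name__ == "__main__":
    dim, batch = 64, 4
    fusion = CollaborativeGatedFusion(dim)
    rq = ResidualQuantizer(dim, levels=3, codebook_size=256)
    visual, textual, collab = (torch.randn(batch, dim) for _ in range(3))
    fused = fusion(visual, textual, collab)
    codes, recon = rq(fused)
    print(codes.shape)  # torch.Size([4, 3]): three semantic code tokens per item
```

In a full system along the lines the abstract describes, the per-item code tuples produced this way would serve as the target token sequences that a fine-tuned language model learns to generate autoregressively.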
Similar Papers
Semantic Item Graph Enhancement for Multimodal Recommendation
Information Retrieval
Helps online stores show you better stuff.
Multi-Aspect Cross-modal Quantization for Generative Recommendation
Information Retrieval
Helps computers guess what you'll like next.