Distribution-Guided Auto-Encoder for User Multimodal Interest Cross Fusion
By: Moyu Zhang, Yongxiang Tang, Yujun Jin, and more
Potential Business Impact:
Helps online stores show you things you'll like.
Traditional recommendation methods rely on correlating the embedding vectors of item IDs to capture implicit collaborative filtering signals and model the user's interest in the target item. Consequently, these ID-based methods often suffer from data sparsity stemming from the sparse nature of ID features. To alleviate item ID sparsity, recommendation models incorporate multimodal item information to improve recommendation accuracy. However, existing multimodal recommendation methods typically employ early fusion, combining text and image features while neglecting the contextual influence of user behavior sequences. This oversight prevents multimodal interest representations from adapting dynamically to behavioral patterns, restricting the model's capacity to capture users' multimodal interests. Therefore, this paper proposes the Distribution-Guided Multimodal-Interest Auto-Encoder (DMAE), which achieves cross fusion of user multimodal interests at the behavioral level. Extensive experiments demonstrate the superiority of DMAE.
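The abstract does not spell out DMAE's architecture, so the sketch below only illustrates the general contrast it draws: instead of early-fusing text and image features per item, each modality's interest is first extracted from the behavior sequence (here with target-aware attention), and the two interests are then cross-fused through an auto-encoder bottleneck. All module names, shapes, and the attention choice are assumptions for illustration, and the paper's distribution-guidance component is omitted entirely; this is a minimal sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BehaviorLevelCrossFusion(nn.Module):
    """Hypothetical sketch of behavior-level multimodal interest fusion.

    Assumptions (not from the paper): target-aware attention per modality,
    an MSE auto-encoder as the fusion bottleneck, embedding dim 64.
    """

    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        # Per-modality attention: the target item queries the behavior sequence,
        # so each modality's interest is conditioned on behavioral context.
        self.text_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.img_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Auto-encoder over the concatenated modality interests; the bottleneck
        # serves as a compact cross-modal interest code.
        self.encoder = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.decoder = nn.Linear(dim, 2 * dim)

    def forward(self, target_text, target_img, seq_text, seq_img):
        # target_*: (B, 1, dim) target-item features; seq_*: (B, L, dim)
        # per-behavior features for each modality.
        text_interest, _ = self.text_attn(target_text, seq_text, seq_text)
        img_interest, _ = self.img_attn(target_img, seq_img, seq_img)
        joint = torch.cat([text_interest, img_interest], dim=-1)  # (B, 1, 2*dim)
        code = self.encoder(joint)    # fused multimodal interest representation
        recon = self.decoder(code)    # reconstruction for the auto-encoder loss
        recon_loss = nn.functional.mse_loss(recon, joint)
        return code.squeeze(1), recon_loss

# Usage with random stand-in features (batch of 8, sequence length 20):
B, L, dim = 8, 20, 64
model = BehaviorLevelCrossFusion(dim)
code, loss = model(torch.randn(B, 1, dim), torch.randn(B, 1, dim),
                   torch.randn(B, L, dim), torch.randn(B, L, dim))
```

The design point the sketch captures is that fusion happens after each modality has been contextualized by the behavior sequence, rather than concatenating raw text and image features up front.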
Similar Papers
Decoupled Multimodal Fusion for User Interest Modeling in Click-Through Rate Prediction
Information Retrieval
Shows you better stuff you might like.
Knowledge graph-based personalized multimodal recommendation fusion framework
Information Retrieval
Helps computers understand what you like better.
Causal Inspired Multi Modal Recommendation
Information Retrieval
Fixes online shopping picks by ignoring fake trends.