Knowledge graph-based personalized multimodal recommendation fusion framework
By: Yu Fang
Potential Business Impact:
Helps computers better understand what you like.
In today's information-rich environment, rapid advances in artificial intelligence have made recommendation systems indispensable. Conventional approaches based on collaborative filtering or individual attributes struggle to capture nuanced user interests. Knowledge graphs and multimodal data integration can represent users and items with greater richness and precision. This paper reviews existing multimodal knowledge graph recommendation frameworks and identifies their shortcomings in modality interaction and higher-order dependency modeling. We propose the Cross-Graph Cross-Modal Mutual Information-Driven Unified Knowledge Graph Learning and Recommendation Framework (CrossGMMI-DUKGLR), which employs a pre-trained visual-text alignment model for feature extraction, achieves fine-grained modality fusion through multi-head cross-attention, and propagates higher-order adjacency information via graph attention networks.
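The abstract names three components: pre-trained visual-text features, cross-attention fusion, and graph-attention propagation. Below is a minimal PyTorch sketch of how these pieces could fit together, not the paper's actual implementation. It assumes item features already extracted by a CLIP-style alignment model (the paper does not specify which), uses text-queries-over-image cross-attention for fusion, and a simplified single-head graph attention layer in place of the full GAT; all class names, dimensions, and the toy adjacency are illustrative.

```python
# Sketch of the pipeline described in the abstract (assumptions noted inline):
# 1) image/text item features come from a pre-trained vision-text alignment
#    model such as CLIP (assumed; the paper names no specific model),
# 2) modalities are fused with multi-head cross-attention,
# 3) fused item embeddings are propagated over graph neighbours with a
#    single-head graph-attention layer (a simplified stand-in for a full GAT).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalFusion(nn.Module):
    """Fuse text and image item features with multi-head cross-attention."""
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feat: torch.Tensor, img_feat: torch.Tensor) -> torch.Tensor:
        # Text queries attend over image features. One token per modality here;
        # patch/token-level features in the same call give finer-grained fusion.
        fused, _ = self.attn(query=text_feat, key=img_feat, value=img_feat)
        return self.norm(text_feat + fused)  # residual connection + layer norm

class GraphAttentionLayer(nn.Module):
    """Single-head GAT-style layer propagating fused item embeddings
    along knowledge-graph adjacency."""
    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)
        self.a = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (N, dim) node embeddings; adj: (N, N) 0/1 adjacency with self-loops.
        Wh = self.W(h)
        N = Wh.size(0)
        # Pairwise attention logits e_ij = a([Wh_i || Wh_j]).
        e = self.a(torch.cat([Wh.unsqueeze(1).expand(N, N, -1),
                              Wh.unsqueeze(0).expand(N, N, -1)], dim=-1)).squeeze(-1)
        e = F.leaky_relu(e, negative_slope=0.2)
        e = e.masked_fill(adj == 0, float("-inf"))  # attend only to neighbours
        alpha = torch.softmax(e, dim=-1)
        return F.elu(alpha @ Wh)

if __name__ == "__main__":
    N, dim = 6, 512                    # toy graph: 6 items
    text = torch.randn(N, 1, dim)      # (items, tokens, dim) from text encoder
    image = torch.randn(N, 1, dim)     # (items, tokens, dim) from image encoder
    fused = CrossModalFusion(dim)(text, image).squeeze(1)
    adj = ((torch.rand(N, N) > 0.5).float() + torch.eye(N)).clamp(max=1)
    out = GraphAttentionLayer(dim)(fused, adj)
    print(out.shape)                   # torch.Size([6, 512])
```

Masking the attention logits to graph neighbours before the softmax is what makes the second layer a graph attention step rather than full self-attention; stacking such layers would propagate the higher-order adjacency information the abstract refers to.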
Similar Papers
Causal Inspired Multi Modal Recommendation
Information Retrieval
Fixes online shopping picks by ignoring fake trends.
LLM4Rec: Large Language Models for Multimodal Generative Recommendation with Causal Debiasing
Information Retrieval
Shows you movies and products you'll like.
Gated Multimodal Graph Learning for Personalized Recommendation
Information Retrieval
Helps online stores show you better stuff.