Unifying Inductive, Cross-Domain, and Multimodal Learning for Robust and Generalizable Recommendation
By: Chanyoung Chung, Kyeongryul Lee, Sunbin Park, and more
Potential Business Impact:
Recommends better things by learning from many sources.
Recommender systems have long been built on modeling interactions between users and items, and recent studies have sought to broaden this paradigm by generalizing to new users and items, incorporating diverse information sources, and transferring knowledge across domains. Nevertheless, these efforts have largely focused on individual aspects, hindering their ability to tackle the complex recommendation scenarios that arise in daily consumption across diverse domains. In this paper, we present MICRec, a unified framework that fuses inductive modeling, multimodal guidance, and cross-domain transfer to capture user contexts and latent preferences in heterogeneous and incomplete real-world data. Moving beyond the inductive backbone of INMO, our model refines expressive representations through modality-based aggregation and alleviates data sparsity by leveraging overlapping users as anchors across domains, thereby enabling robust and generalizable recommendation. Experiments show that MICRec outperforms 12 baselines, with notable gains in domains with limited training data.
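The two ideas the abstract names — modality-based aggregation for inductive user representations, and overlapping users as cross-domain anchors — can be illustrated with a minimal sketch. This is not MICRec's actual method; the function names, the mean-pooling aggregator, and the simple anchor-averaging step are all illustrative assumptions standing in for the paper's learned components.

```python
import numpy as np

def modality_aggregate(interacted_items, modality_feats):
    """Inductively embed a user by pooling the modality features
    (e.g., image/text vectors) of the items they interacted with.
    A new user needs no trained ID embedding, only interactions.
    (Mean pooling is a placeholder for a learned aggregator.)"""
    feats = np.stack([modality_feats[i] for i in interacted_items])
    return feats.mean(axis=0)

def anchor_overlapping_users(emb_a, emb_b, overlap_users):
    """Use users present in both domains as anchors: pull their two
    domain embeddings toward a shared point so that signal from the
    data-rich domain reaches the sparse one. (Hard averaging stands
    in for a learned alignment objective.)"""
    for u in overlap_users:
        anchor = (emb_a[u] + emb_b[u]) / 2.0
        emb_a[u] = anchor.copy()
        emb_b[u] = anchor.copy()
    return emb_a, emb_b
```

In this toy form, a cold-start user in domain B still gets a representation from item modalities, and any user shared with domain A additionally inherits preference signal through the anchor step — the combination the abstract argues prior work treats in isolation.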
Similar Papers
Causal Inspired Multi Modal Recommendation
Information Retrieval
Fixes online shopping picks by ignoring fake trends.
MLLMRec: Exploring the Potential of Multimodal Large Language Models in Recommender Systems
Information Retrieval
Suggests better movies and products you'll like.
Knowledge graph-based personalized multimodal recommendation fusion framework
Information Retrieval
Helps computers understand what you like better.