LLM4Rec: Large Language Models for Multimodal Generative Recommendation with Causal Debiasing
By: Bo Ma, Hang Li, ZeHua Hu, and more
Potential Business Impact:
Shows you movies and products you'll like.
Contemporary generative recommendation systems face significant challenges in handling multimodal data, mitigating algorithmic biases, and providing transparent decision-making. This paper introduces an enhanced generative recommendation framework that addresses these limitations through five key innovations: a multimodal fusion architecture, retrieval-augmented generation mechanisms, causal inference-based debiasing, explainable recommendation generation, and real-time adaptive learning. Our framework leverages large language models as the backbone while incorporating specialized modules for cross-modal understanding, contextual knowledge integration, bias mitigation, explanation synthesis, and continuous model adaptation. Extensive experiments on three benchmark datasets (MovieLens-25M, Amazon-Electronics, Yelp-2023) demonstrate consistent improvements in recommendation accuracy, fairness, and diversity over existing approaches: the proposed framework achieves up to a 2.3% gain in NDCG@10 and a 1.4% gain in diversity metrics while maintaining computational efficiency through optimized inference strategies.
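The abstract does not specify which causal estimator the debiasing module uses. As a minimal sketch, one standard causal-inference technique for this setting is inverse propensity scoring (IPS), which reweights the training loss by each item's estimated exposure probability so that heavily exposed (popular) items do not dominate learning. The function name `ips_bce_loss`, the clipping threshold, and the toy propensity values below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: IPS-weighted loss for debiasing implicit-feedback
# recommendation. Illustrates the general causal-debiasing idea only; the
# paper's actual estimator and module design are not published here.
import torch
import torch.nn.functional as F

def ips_bce_loss(scores, labels, propensities, clip=0.05):
    """Binary cross-entropy reweighted by inverse exposure propensity.

    scores:       raw model logits for user-item pairs, shape (B,)
    labels:       observed interactions, 1.0 or 0.0, shape (B,)
    propensities: estimated probability each item was exposed, shape (B,)
    clip:         lower bound on propensities to keep weights (and hence
                  the estimator's variance) bounded
    """
    weights = 1.0 / propensities.clamp(min=clip)  # inverse propensity weights
    per_example = F.binary_cross_entropy_with_logits(
        scores, labels, reduction="none"
    )
    return (weights * per_example).mean()

# Toy usage: random propensities stand in for an exposure model that would
# normally be fit from logged impression data.
scores = torch.randn(8)
labels = torch.randint(0, 2, (8,)).float()
propensities = torch.rand(8) * 0.9 + 0.05
print(ips_bce_loss(scores, labels, propensities))
```

Clipping the propensities trades a small amount of bias for a large reduction in variance, a standard compromise when exposure probabilities are estimated rather than observed.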
Similar Papers
Knowledge graph-based personalized multimodal recommendation fusion framework
Information Retrieval
Helps computers understand what you like better.
A Survey on Generative Recommendation: Data, Model, and Tasks
Information Retrieval
Helps computers suggest things you'll like.
Causal Inspired Multi Modal Recommendation
Information Retrieval
Fixes online shopping picks by ignoring fake trends.