Enhancing Multimodal Recommendations with Vision-Language Models and Information-Aware Fusion
By: Hai-Dang Kieu, Min Xu, Thanh Trung Huynh, and more
Potential Business Impact:
Helps online stores recommend products that better match what shoppers want by combining product images and text.
Recent advances in multimodal recommendation (MMR) highlight the potential of integrating visual and textual content to enrich item representations. However, existing methods often rely on coarse visual features and naive fusion strategies, resulting in redundant or misaligned representations. From an information-theoretic perspective, effective fusion should balance unique, shared, and redundant modality information to preserve complementary cues. To this end, we propose VIRAL, a novel Vision-Language and Information-aware Recommendation framework that enhances multimodal fusion through two components: (i) a VLM-based visual enrichment module that generates fine-grained, title-guided descriptions for semantically aligned image representations, and (ii) an information-aware fusion module inspired by Partial Information Decomposition (PID) to disentangle and integrate complementary signals. Experiments on three Amazon datasets show that VIRAL consistently outperforms strong multimodal baselines and substantially improves the contribution of visual features.
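To make the two components concrete, the sketch below illustrates one plausible reading of the abstract: a title-guided prompt for a vision-language model, and a fusion layer that splits visual and textual embeddings into unique and shared parts in the spirit of Partial Information Decomposition. All names (title_guided_prompt, PIDFusion), dimensions, the prompt wording, and the cosine alignment term are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the two components described in the abstract.
# Module names, sizes, and losses are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def title_guided_prompt(title: str) -> str:
    """Build a title-conditioned prompt for a vision-language model.

    The paper describes title-guided, fine-grained image descriptions;
    the exact prompt wording here is an assumption.
    """
    return (
        "Describe the product shown in the image, focusing on details "
        f"relevant to '{title}'."
    )


class PIDFusion(nn.Module):
    """Fuse visual and textual item embeddings by separating unique and
    shared (redundant) components, loosely inspired by Partial Information
    Decomposition. One plausible parameterization, not the paper's."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.unique_v = nn.Linear(dim, dim)   # visual-only information
        self.unique_t = nn.Linear(dim, dim)   # textual-only information
        self.shared_v = nn.Linear(dim, dim)   # visual view of shared info
        self.shared_t = nn.Linear(dim, dim)   # textual view of shared info
        self.out = nn.Linear(3 * dim, dim)

    def forward(self, v: torch.Tensor, t: torch.Tensor):
        u_v, u_t = self.unique_v(v), self.unique_t(t)
        s_v, s_t = self.shared_v(v), self.shared_t(t)
        # Encourage the two shared projections to agree (redundant signal);
        # this cosine alignment is a stand-in for an information-theoretic term.
        align_loss = 1.0 - F.cosine_similarity(s_v, s_t, dim=-1).mean()
        shared = 0.5 * (s_v + s_t)
        fused = self.out(torch.cat([u_v, u_t, shared], dim=-1))
        return fused, align_loss


if __name__ == "__main__":
    # Toy usage: pretend `v` encodes the VLM-generated description of the
    # item image and `t` encodes the item's title/text metadata.
    v = torch.randn(8, 256)
    t = torch.randn(8, 256)
    fusion = PIDFusion(dim=256)
    item_repr, align_loss = fusion(v, t)
    print(item_repr.shape, align_loss.item())
```

In this reading, the fused representation would feed a standard recommendation backbone, with the alignment term added to the training objective so that shared information is kept consistent across modalities while unique cues remain separate.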
Similar Papers
Enhancing Multimodal Recommendations with Vision-Language Models and Information-Aware Fusion
Information Retrieval
Improves online shopping suggestions using pictures and words.
Visual Representation Alignment for Multimodal Large Language Models
CV and Pattern Recognition
Helps computers see details for better understanding.
Multimodal Fusion and Vision-Language Models: A Survey for Robot Vision
Robotics
Helps robots see and understand the world.