Score: 1

Enhancing Multimodal Recommendations with Vision-Language Models and Information-Aware Fusion

Published: November 3, 2025 | arXiv ID: 2511.02113v2

By: Hai-Dang Kieu, Min Xu, Thanh Trung Huynh, and more

Potential Business Impact:

Helps online stores recommend more relevant products by making better use of product images and text.

Business Areas:
Visual Search, Internet Services

Recent advances in multimodal recommendation (MMR) highlight the potential of integrating visual and textual content to enrich item representations. However, existing methods often rely on coarse visual features and naive fusion strategies, resulting in redundant or misaligned representations. From an information-theoretic perspective, effective fusion should balance unique, shared, and redundant modality information to preserve complementary cues. To this end, we propose VIRAL, a novel Vision-Language and Information-aware Recommendation framework that enhances multimodal fusion through two components: (i) a VLM-based visual enrichment module that generates fine-grained, title-guided descriptions for semantically aligned image representations, and (ii) an information-aware fusion module inspired by Partial Information Decomposition (PID) to disentangle and integrate complementary signals. Experiments on three Amazon datasets show that VIRAL consistently outperforms strong multimodal baselines and substantially improves the contribution of visual features.
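The abstract describes VIRAL's information-aware fusion only at a high level. Below is a minimal, hypothetical PyTorch sketch of what a PID-inspired fusion layer could look like: modality-unique and shared components are encoded separately and mixed with a learned gate. The class name, dimensions, and gating scheme are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of a PID-inspired fusion layer.
# It separates visual-unique, textual-unique, and shared signals, then gates
# between them to form a fused item representation.
import torch
import torch.nn as nn


class PIDFusion(nn.Module):
    def __init__(self, vis_dim: int, txt_dim: int, hid_dim: int = 128):
        super().__init__()
        # Encoders for modality-unique information.
        self.vis_unique = nn.Linear(vis_dim, hid_dim)
        self.txt_unique = nn.Linear(txt_dim, hid_dim)
        # Shared encoders: both modalities map into a common space, and their
        # elementwise agreement approximates the shared/redundant component.
        self.vis_shared = nn.Linear(vis_dim, hid_dim)
        self.txt_shared = nn.Linear(txt_dim, hid_dim)
        # Gate producing per-item weights over the three components.
        self.gate = nn.Sequential(nn.Linear(3 * hid_dim, 3), nn.Softmax(dim=-1))
        self.out = nn.Linear(hid_dim, hid_dim)

    def forward(self, v: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        u_v = torch.relu(self.vis_unique(v))   # visual-unique signal
        u_t = torch.relu(self.txt_unique(t))   # textual-unique signal
        s = torch.relu(self.vis_shared(v)) * torch.relu(self.txt_shared(t))  # shared signal
        w = self.gate(torch.cat([u_v, u_t, s], dim=-1))  # mixing weights, shape (batch, 3)
        fused = w[..., 0:1] * u_v + w[..., 1:2] * u_t + w[..., 2:3] * s
        return self.out(fused)


if __name__ == "__main__":
    # Example: a batch of 4 items with 512-d image features (e.g., from a VLM
    # image encoder) and 384-d text features; dimensions are arbitrary choices.
    fusion = PIDFusion(vis_dim=512, txt_dim=384)
    item_repr = fusion(torch.randn(4, 512), torch.randn(4, 384))
    print(item_repr.shape)  # torch.Size([4, 128])
```

In this sketch the gate lets the model downweight redundant visual content for items whose image adds little beyond the title, which is the kind of balance between unique, shared, and redundant information the abstract attributes to the PID perspective.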

Country of Origin
🇦🇺 Australia

Repos / Data Links

Page Count
4 pages

Category
Computer Science:
Information Retrieval