Score: 2

Structural and Disentangled Adaptation of Large Vision Language Models for Multimodal Recommendation

Published: December 7, 2025 | arXiv ID: 2512.06883v1

By: Zhongtao Rao, Peilin Zhou, Dading Chong, and more

Potential Business Impact:

Helps online stores recommend products that better match what shoppers are looking for, including less popular (long-tail) items.

Business Areas:
Semantic Search, Internet Services

Multimodal recommendation enhances accuracy by leveraging visual and textual signals, and its success largely depends on learning high-quality cross-modal representations. Recent advances in Large Vision-Language Models (LVLMs) offer unified multimodal representation learning, making them a promising backbone. However, applying LVLMs to recommendation remains challenging due to (i) representation misalignment, where domain gaps between item data and general pre-training lead to unaligned embedding spaces, and (ii) gradient conflicts during fine-tuning, where shared adapters cause interference and a lack of discriminative power. To address these challenges, we propose SDA, a lightweight framework for Structural and Disentangled Adaptation, which integrates two components: Cross-Modal Structural Alignment (CMSA) and Modality-Disentangled Adaptation (MoDA). CMSA aligns embeddings using intra-modal structures as a soft teacher, while MoDA mitigates gradient conflicts via expertized, gated low-rank paths to disentangle gradient flows. Experiments on three public Amazon datasets show SDA integrates seamlessly with existing multimodal and sequential recommenders, yielding average gains of 6.15% in Hit@10 and 8.64% in NDCG@10. It also achieves up to 12.83% and 18.70% gains on long-tail items with minimal inference overhead. Our code and full experimental results are available at https://github.com/RaoZhongtao/SDA.
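
The abstract only names the two components, but the mechanisms it describes map onto familiar building blocks. Below is a minimal PyTorch sketch, not the paper's implementation, of what they could look like: a gated low-rank adapter with per-modality expert paths (the MoDA idea) and an alignment loss that treats intra-modal similarity structure as a soft teacher (the CMSA idea). All class and function names, the choice of KL divergence over softened similarities, and the hyperparameters (`rank`, `num_experts`, `tau`) are assumptions; see the linked repository for the authors' actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedLowRankAdapter(nn.Module):
    """Frozen base projection plus per-modality low-rank expert paths mixed by a
    learned gate -- a rough sketch of gated low-rank paths (the MoDA idea)."""

    def __init__(self, dim: int, rank: int = 8, num_experts: int = 2):
        super().__init__()
        self.base = nn.Linear(dim, dim, bias=False)
        self.base.weight.requires_grad = False  # backbone weight stays frozen
        self.down = nn.ModuleList([nn.Linear(dim, rank, bias=False) for _ in range(num_experts)])
        self.up = nn.ModuleList([nn.Linear(rank, dim, bias=False) for _ in range(num_experts)])
        self.gate = nn.Linear(dim, num_experts)  # routes each input to the expert paths

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = F.softmax(self.gate(x), dim=-1)                        # (..., num_experts)
        deltas = torch.stack(
            [up(down(x)) for down, up in zip(self.down, self.up)], dim=-1
        )                                                             # (..., dim, num_experts)
        return self.base(x) + (deltas * gate.unsqueeze(-2)).sum(dim=-1)


def structural_alignment_loss(img_emb: torch.Tensor,
                              txt_emb: torch.Tensor,
                              tau: float = 0.1) -> torch.Tensor:
    """Align cross-modal similarities to the intra-modal (text-text) similarity
    structure used as a soft teacher -- one plausible reading of CMSA."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    student = F.log_softmax(img @ txt.t() / tau, dim=-1)   # image-to-text similarities
    teacher = F.softmax(txt @ txt.t() / tau, dim=-1)       # intra-text structure as soft labels
    return F.kl_div(student, teacher, reduction="batchmean")
```

In this reading, each modality's gradients flow mainly through its own low-rank path, which is how a gated-expert design can reduce the cross-modal gradient interference the abstract describes, while the soft-teacher loss pulls the cross-modal similarity pattern toward the structure already present within each modality.
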

Country of Origin
🇦🇺 🇨🇳 🇸🇬 Australia, China, Singapore

Repos / Data Links
https://github.com/RaoZhongtao/SDA

Page Count
5 pages

Category
Computer Science:
Information Retrieval