A Hybrid Multimodal Deep Learning Framework for Intelligent Fashion Recommendation
By: Kamand Kalashi, Babak Teimourpour
Potential Business Impact:
Helps online stores pick clothes that look good together.
The rapid expansion of online fashion platforms has created an increasing demand for intelligent recommender systems capable of understanding both visual and textual cues. This paper proposes a hybrid multimodal deep learning framework for fashion recommendation that jointly addresses two key tasks: outfit compatibility prediction and complementary item retrieval. The model leverages the visual and textual encoders of the CLIP architecture to obtain joint latent representations of fashion items, which are then integrated into a unified feature vector and processed by a transformer encoder. For compatibility prediction, an "outfit token" is introduced to model the holistic relationships among items, achieving an AUC of 0.95 on the Polyvore dataset. For complementary item retrieval, a "target item token" representing the desired item description is used to retrieve compatible items, reaching an accuracy of 69.24% under the Fill-in-the-Blank (FITB) metric. The proposed approach demonstrates strong performance across both tasks, highlighting the effectiveness of multimodal learning for fashion recommendation.
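The core architectural idea, prepending a learned "outfit token" to the sequence of fused CLIP item embeddings and reading the compatibility score off that token's transformer output, can be illustrated with a minimal PyTorch sketch. This is an assumption-based illustration, not the authors' code: the class name, dimensions, and layer counts (e.g. OutfitCompatibilityModel, d_model=512, 4 layers) are hypothetical, and the fused CLIP image+text embeddings are assumed to be computed upstream.

```python
import torch
import torch.nn as nn

class OutfitCompatibilityModel(nn.Module):
    """Sketch of the outfit-token idea: a transformer encoder over
    per-item fused CLIP embeddings, with a learned token whose output
    summarizes the whole outfit for compatibility scoring."""
    def __init__(self, d_model=512, n_heads=8, n_layers=4):
        super().__init__()
        # learned "outfit token", prepended to every outfit sequence
        self.outfit_token = nn.Parameter(torch.randn(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, 1)  # compatibility logit

    def forward(self, item_embeddings):
        # item_embeddings: (batch, n_items, d_model) fused CLIP features
        batch = item_embeddings.size(0)
        tokens = torch.cat(
            [self.outfit_token.expand(batch, -1, -1), item_embeddings], dim=1
        )
        encoded = self.encoder(tokens)
        # position 0 is the outfit token; its encoding scores the outfit
        return self.classifier(encoded[:, 0]).squeeze(-1)

# usage: score a batch of 2 outfits with 4 items each
model = OutfitCompatibilityModel()
scores = model(torch.randn(2, 4, 512))
print(torch.sigmoid(scores))  # compatibility probabilities
```

For the complementary-item retrieval task described above, the same pattern would presumably swap the outfit token for a "target item token" built from the desired item's text embedding, and rank candidate items by how well they complete the encoded outfit.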
Similar Papers
Causal Inspired Multi Modal Recommendation
Information Retrieval
Fixes online shopping picks by ignoring fake trends.
From Pixels to Posts: Retrieval-Augmented Fashion Captioning and Hashtag Generation
CV and Pattern Recognition
Creates smart fashion descriptions and hashtags.