FITRep: Attention-Guided Item Representation via MLLMs
By: Guoxiao Zhang, Ao Li, Tan Qu, and more
Potential Business Impact:
Finds and removes nearly identical online items.
Online platforms often suffer from degraded user experience due to near-duplicate items with similar visuals and text. While Multimodal Large Language Models (MLLMs) enable multimodal embedding, existing methods treat representations as black boxes, ignoring structural relationships (e.g., primary vs. auxiliary elements) and leading to a local structural collapse problem. To address this, inspired by Feature Integration Theory (FIT), we propose FITRep, the first attention-guided, white-box item representation framework for fine-grained item deduplication. FITRep consists of: (1) Concept Hierarchical Information Extraction (CHIE), which uses MLLMs to extract hierarchical semantic concepts; (2) Structure-Preserving Dimensionality Reduction (SPDR), an adaptive UMAP-based method for efficient information compression; and (3) FAISS-Based Clustering (FBC), which assigns each item a unique cluster ID via FAISS. Deployed on Meituan's advertising system, FITRep achieves +3.60% CTR and +4.25% CPM gains in online A/B tests, demonstrating both effectiveness and real-world impact.
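To make the pipeline shape concrete, the sketch below illustrates how the SPDR and FBC stages could be wired together with off-the-shelf UMAP and FAISS. It is a minimal sketch under stated assumptions, not the paper's implementation: the function name `deduplicate`, the dimensionality, the cluster count, and all other parameters are illustrative placeholders, and the CHIE-stage embeddings are assumed to already exist as a NumPy array.

```python
# Hypothetical sketch of SPDR-style compression followed by FBC-style clustering.
# Parameters and names are illustrative assumptions, not FITRep's settings.
import numpy as np
import umap   # pip install umap-learn
import faiss  # pip install faiss-cpu

def deduplicate(item_embeddings: np.ndarray, n_dims: int = 32, n_clusters: int = 1000):
    """Compress item embeddings with UMAP, then assign each item a cluster ID via FAISS k-means."""
    # Dimensionality reduction: compress high-dimensional concept embeddings (SPDR-like step).
    reducer = umap.UMAP(n_components=n_dims, metric="cosine")
    reduced = np.ascontiguousarray(reducer.fit_transform(item_embeddings), dtype=np.float32)

    # Clustering: items that share a cluster ID become near-duplicate candidates (FBC-like step).
    kmeans = faiss.Kmeans(n_dims, n_clusters, niter=20, seed=42)
    kmeans.train(reduced)
    _, cluster_ids = kmeans.index.search(reduced, 1)  # nearest centroid per item
    return cluster_ids.ravel()
```

In practice the cluster count would depend on catalog size, and the paper's SPDR component is described as adaptive rather than using fixed UMAP parameters as shown here.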
Similar Papers
A Hybrid Multimodal Deep Learning Framework for Intelligent Fashion Recommendation
Information Retrieval
Helps online stores pick clothes that look good together.
Learning Item Representations Directly from Multimodal Features for Effective Recommendation
Information Retrieval
Shows you better stuff you might like.