Score: 2

CalibCLIP: Contextual Calibration of Dominant Semantics for Text-Driven Image Retrieval

Published: October 7, 2025 | arXiv ID: 2510.05586v1

By: Bin Kang, Bin Chen, Junjie Wang, and more

BigTech Affiliations: Tencent

Potential Business Impact:

Retrieves more relevant images for text-based searches.

Business Areas:
Image Recognition, Data and Analytics, Software

Existing Visual Language Models (VLMs) suffer from a structural limitation in text-driven image retrieval: a few low-contribution tokens may excessively capture global semantics, dominating the information aggregation process and suppressing discriminative features. To address this, we introduce CalibCLIP, a training-free method designed to calibrate the suppressive effect of dominant tokens. Specifically, in the visual space, we propose the Contrastive Visual Enhancer (CVE), which decouples visual features into target and low-information regions, then identifies dominant tokens and dynamically suppresses their representations. In the textual space, we introduce the Discriminative Concept Calibrator (DCC), which differentiates between general and discriminative concepts within the text query. By mitigating the challenges posed by generic concepts and improving the representations of discriminative concepts, DCC strengthens the differentiation among similar samples. Finally, extensive experiments demonstrate consistent improvements across seven benchmarks spanning three image retrieval tasks, underscoring the effectiveness of CalibCLIP. Code is available at: https://github.com/kangbin98/CalibCLIP
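To make the calibration idea concrete, below is a minimal PyTorch sketch of the two steps the abstract describes: damping dominant visual tokens before pooling, and reweighting query words by how discriminative they are. This is an illustration of the general technique under assumed tensor shapes and hyperparameters, not the authors' implementation; all function names, thresholds, and the attention-based dominance heuristic are hypothetical.

```python
# Illustrative sketch (NOT the authors' code) of training-free calibration
# in the spirit of CalibCLIP: suppress dominant visual tokens, then boost
# discriminative words in the text query. Names/thresholds are assumptions.

import torch
import torch.nn.functional as F

def suppress_dominant_tokens(tokens, attn_weights, top_frac=0.1, alpha=0.5):
    """Damp the contribution of high-attention ("dominant") tokens.

    tokens:       (N, D) token features from the visual encoder.
    attn_weights: (N,) attention mass each token receives (e.g. the [CLS]
                  attention row); higher = more dominant. Assumed heuristic.
    top_frac:     fraction of tokens treated as dominant (assumed).
    alpha:        suppression factor for dominant tokens (assumed).
    """
    n = tokens.shape[0]
    k = max(1, int(top_frac * n))
    dominant = torch.topk(attn_weights, k).indices
    scale = torch.ones(n, device=tokens.device)
    scale[dominant] = alpha                      # damp dominant tokens
    calibrated = tokens * scale.unsqueeze(-1)
    # Re-pool with the damped attention distribution.
    w = F.softmax(attn_weights * scale, dim=0)
    return (w.unsqueeze(-1) * calibrated).sum(dim=0)

def reweight_text_concepts(word_feats, global_feat, beta=2.0):
    """Up-weight discriminative words, down-weight generic ones.

    word_feats:  (T, D) per-word text features.
    global_feat: (D,) pooled query feature.
    beta:        sharpness of the reweighting (assumed).
    """
    sim = F.cosine_similarity(word_feats, global_feat.unsqueeze(0), dim=-1)
    # Words closest to the global vector are treated as generic concepts.
    w = F.softmax(beta * (1.0 - sim), dim=0)
    return (w.unsqueeze(-1) * word_feats).sum(dim=0)

if __name__ == "__main__":
    torch.manual_seed(0)
    vis_tokens = torch.randn(197, 512)           # e.g. ViT-B/16 patch tokens
    attn = torch.rand(197)                       # stand-in attention mass
    img_feat = suppress_dominant_tokens(vis_tokens, attn)

    word_feats = torch.randn(12, 512)            # per-word query features
    txt_feat = reweight_text_concepts(word_feats, word_feats.mean(dim=0))

    score = F.cosine_similarity(img_feat, txt_feat, dim=0)
    print(f"calibrated image-text similarity: {score.item():.4f}")
```

Because both steps only rescale frozen encoder features at inference time, no retraining is needed, which is consistent with the abstract's "training-free" claim.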

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/kangbin98/CalibCLIP

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition