Training-free Conditional Image Embedding Framework Leveraging Large Vision Language Models

Published: December 26, 2025 | arXiv ID: 2512.21860v1

By: Masayuki Kawarada, Kosuke Yamada, Antonio Tejero-de-Pablos, and more

Potential Business Impact:

Produces image features that focus on a specific aspect you ask about (e.g., color or genre), without any extra training.

Business Areas:
Image Recognition Data and Analytics, Software

Conditional image embeddings are feature representations that focus on specific aspects of an image indicated by a given textual condition (e.g., color, genre); producing such embeddings has been a challenging problem. Although recent vision foundation models, such as CLIP, offer rich representations of images, they are not designed to focus on a specified condition. In this paper, we propose DIOR, a method that leverages a large vision-language model (LVLM) to generate conditional image embeddings. DIOR is a training-free approach that prompts the LVLM to describe an image with a single word related to a given condition. The hidden state vector of the LVLM's last token is then extracted as the conditional image embedding. DIOR provides a versatile solution that can be applied to any image and condition without additional training or task-specific priors. Comprehensive experimental results on conditional image similarity tasks demonstrate that DIOR outperforms existing training-free baselines, including CLIP. Furthermore, across multiple settings, DIOR achieves superior performance compared to methods that require additional training.
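
A minimal sketch of the idea described in the abstract: prompt an LVLM to describe the image with a single word tied to the given condition, then take the last token's final hidden state as the conditional embedding. The model choice (a LLaVA checkpoint), prompt wording, and layer selection here are assumptions for illustration, not the authors' exact setup.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

MODEL_ID = "llava-hf/llava-1.5-7b-hf"  # assumed backbone; the paper may use a different LVLM
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = LlavaForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

def conditional_embedding(image: Image.Image, condition: str) -> torch.Tensor:
    # Prompt wording is illustrative; the key step is asking for a single word
    # related to the condition (e.g., "color", "genre").
    prompt = (
        f"USER: <image>\nDescribe this image with a single word "
        f"related to its {condition}. ASSISTANT:"
    )
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    # Hidden state of the final input token in the last layer serves as the embedding.
    return outputs.hidden_states[-1][0, -1, :]

# Usage: compare two images under the same condition via cosine similarity.
# emb_a = conditional_embedding(Image.open("a.jpg"), "color")
# emb_b = conditional_embedding(Image.open("b.jpg"), "color")
# sim = torch.nn.functional.cosine_similarity(emb_a, emb_b, dim=0)
```

Because no fine-tuning is involved, the same function can be reused for any condition string; only the prompt changes.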

Page Count
16 pages

Category
Computer Science:
CV and Pattern Recognition