Training-free Conditional Image Embedding Framework Leveraging Large Vision Language Models
By: Masayuki Kawarada, Kosuke Yamada, Antonio Tejero-de-Pablos, et al.
Potential Business Impact:
Makes image features focus on the specific aspect you ask about.
Conditional image embeddings are feature representations that focus on a specific aspect of an image indicated by a given textual condition (e.g., color, genre), a task that has long been challenging. Although recent vision foundation models, such as CLIP, offer rich image representations, they are not designed to focus on a specified condition. In this paper, we propose DIOR, a method that leverages a large vision-language model (LVLM) to generate conditional image embeddings. DIOR is a training-free approach that prompts the LVLM to describe an image with a single word related to the given condition; the hidden state vector of the LVLM's last token is then extracted as the conditional image embedding. DIOR thus provides a versatile solution that can be applied to any image and condition without additional training or task-specific priors. Comprehensive experiments on conditional image similarity tasks demonstrate that DIOR outperforms existing training-free baselines, including CLIP, and achieves superior performance compared to methods that require additional training across multiple settings.
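The mechanism the abstract describes (prompt the LVLM for a one-word, condition-related answer, then take the hidden state of the last token as the embedding) can be sketched as below. This is a minimal illustration, not the paper's implementation: `lvlm_hidden_states` is a hypothetical stand-in for a real LVLM forward pass (e.g., a LLaVA-style model with hidden-state outputs enabled), and the prompt template is an assumption.

```python
import numpy as np

def build_prompt(condition: str) -> str:
    # Assumed prompt template: ask the LVLM to answer with a single
    # word related to the given condition (e.g., "color", "genre").
    return f"Describe the {condition} of this image in one word."

def lvlm_hidden_states(image, prompt: str, dim: int = 8) -> np.ndarray:
    # Hypothetical stand-in for a real LVLM forward pass over the image
    # and prompt. A real model returns one hidden-state vector per token;
    # here we derive deterministic pseudo-states purely for illustration.
    rng = np.random.default_rng(abs(hash((image, prompt))) % (2**32))
    num_tokens = len(prompt.split())
    return rng.standard_normal((num_tokens, dim))

def conditional_embedding(image, condition: str) -> np.ndarray:
    """Training-free conditional embedding: the hidden state of the
    LVLM's last token, L2-normalized for cosine comparison."""
    states = lvlm_hidden_states(image, build_prompt(condition))
    last = states[-1]                 # last-token hidden state
    return last / np.linalg.norm(last)

def similarity(img_a, img_b, condition: str) -> float:
    # Cosine similarity of unit vectors reduces to a dot product, so two
    # images can be compared under the same textual condition.
    return float(conditional_embedding(img_a, condition)
                 @ conditional_embedding(img_b, condition))
```

In practice one would swap the stub for an actual LVLM call and compare images by `similarity("a.jpg", "b.jpg", "color")`, re-embedding the same image under a different condition to shift which aspect the comparison emphasizes.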
Similar Papers
Can Synthetic Images Serve as Effective and Efficient Class Prototypes?
CV and Pattern Recognition
Teaches computers to see with just words.
Vision-Language Model Guided Image Restoration
CV and Pattern Recognition
Fixes blurry pictures using words and images.
Embedding the Teacher: Distilling vLLM Preferences for Scalable Image Retrieval
Information Retrieval
Finds personalized gifts using smart computer vision.