Beyond Accuracy: Metrics that Uncover What Makes a 'Good' Visual Descriptor
By: Ethan Lin, Linxi Zhao, Atharva Sehgal, and more
Potential Business Impact:
Helps computers understand pictures better with words.
Text-based visual descriptors, ranging from simple class names to more descriptive phrases, are widely used in visual concept discovery and image classification with vision-language models (VLMs). Their effectiveness, however, depends on a complex interplay of factors, including semantic clarity, presence in the VLM's pre-training data, and how well the descriptors serve as a meaningful representation space. In this work, we systematically analyze descriptor quality along two key dimensions: (1) representational capacity, and (2) relationship with VLM pre-training data. We evaluate a spectrum of descriptor generation methods, from zero-shot LLM-generated prompts to iteratively refined descriptors. Motivated by ideas from representation alignment and language understanding, we introduce two alignment-based metrics, Global Alignment and CLIP Similarity, that move beyond accuracy. These metrics shed light on how different descriptor generation strategies interact with foundation model properties, offering new ways to study descriptor effectiveness beyond accuracy evaluations.
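The abstract does not spell out how Global Alignment or CLIP Similarity are computed. As a rough illustration of the underlying idea, here is a minimal sketch that scores text descriptors against images by cosine similarity in CLIP's embedding space, using the Hugging Face transformers library. The model choice, the mean-over-images aggregation, and the example descriptors are assumptions for illustration, not the paper's actual metric definitions.

```python
# Minimal sketch: measuring how well text descriptors align with images in
# CLIP embedding space. This is NOT the paper's exact metric; it illustrates
# a CLIP-similarity-style score. The model checkpoint and the mean-cosine
# aggregation are assumptions for illustration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def descriptor_image_similarity(descriptors: list[str],
                                images: list[Image.Image]) -> torch.Tensor:
    """Return, for each descriptor, its mean cosine similarity to the images."""
    inputs = processor(text=descriptors, images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        text_emb = model.get_text_features(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"])
        image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    # Normalize so the dot product below is cosine similarity.
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    # Rows index descriptors, columns index images; average over images.
    return (text_emb @ image_emb.T).mean(dim=1)

# Hypothetical usage: compare a bare class name against a descriptive phrase.
# sims = descriptor_image_similarity(
#     ["a cardinal", "a small red bird with a pointed crest"],
#     [Image.open("bird.jpg")],
# )
```

A score like this captures only one of the paper's two dimensions (relationship with the VLM's pre-training data); the representational-capacity dimension would need a separate evaluation of the descriptors as a feature space.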
Similar Papers
VCRScore: Image captioning metric based on V&L Transformers, CLIP, and precision-recall
CV and Pattern Recognition
Makes picture descriptions more accurate.
Evaluating Robustness of Vision-Language Models Under Noisy Conditions
CV and Pattern Recognition
Tests how well AI sees and understands pictures.
DesCLIP: Robust Continual Learning via General Attribute Descriptions for VLM-Based Visual Recognition
CV and Pattern Recognition
Helps AI remember old lessons while learning new ones.