Beyond Accuracy: Metrics that Uncover What Makes a 'Good' Visual Descriptor

Published: July 4, 2025 | arXiv ID: 2507.03542v2

By: Ethan Lin, Linxi Zhao, Atharva Sehgal, and more

Potential Business Impact:

Helps measure how well written descriptions let computers recognize and classify images.

Business Areas:
Image Recognition, Data and Analytics, Software

Text-based visual descriptors--ranging from simple class names to more descriptive phrases--are widely used in visual concept discovery and image classification with vision-language models (VLMs). Their effectiveness, however, depends on a complex interplay of factors, including semantic clarity, presence in the VLM's pre-training data, and how well the descriptors serve as a meaningful representation space. In this work, we systematically analyze descriptor quality along two key dimensions: (1) representational capacity, and (2) relationship with VLM pre-training data. We evaluate a spectrum of descriptor generation methods, from zero-shot LLM-generated prompts to iteratively refined descriptors. Motivated by ideas from representation alignment and language understanding, we introduce two alignment-based metrics--Global Alignment and CLIP Similarity--that move beyond accuracy. These metrics shed light on how different descriptor generation strategies interact with foundation model properties, offering new ways to study descriptor effectiveness beyond accuracy evaluations.
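To make the "CLIP Similarity" idea concrete, below is a minimal sketch (not the paper's code) of how alignment between text descriptors and class images could be scored with a CLIP-style VLM. The model checkpoint, the descriptor strings, the image paths, and the mean-cosine-similarity aggregation are all illustrative assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint; any CLIP-style VLM could stand in here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical LLM-generated descriptors for one class, plus placeholder image paths.
descriptors = [
    "a photo of a cardinal, a bird with bright red plumage",
    "a photo of a cardinal, a bird with a short crest and a thick beak",
]
images = [Image.open(p) for p in ["cardinal_1.jpg", "cardinal_2.jpg"]]

with torch.no_grad():
    text_inputs = processor(text=descriptors, return_tensors="pt", padding=True)
    image_inputs = processor(images=images, return_tensors="pt")
    text_emb = model.get_text_features(**text_inputs)
    image_emb = model.get_image_features(**image_inputs)

# L2-normalize so the dot product is cosine similarity.
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)

# (num_images, num_descriptors) cosine similarities; the mean is one simple
# summary of how well this descriptor set aligns with the class's images.
sim = image_emb @ text_emb.T
print("mean descriptor-image similarity:", sim.mean().item())
```

Comparing this score across descriptor sets (e.g., bare class names versus iteratively refined phrases) gives an accuracy-independent view of descriptor quality, in the spirit of the metrics the paper proposes.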

Country of Origin
🇺🇸 United States


Page Count
8 pages

Category
Computer Science:
Computer Vision and Pattern Recognition