Score: 2

When More Words Say Less: Decoupling Length and Specificity in Image Description Evaluation

Published: January 8, 2026 | arXiv ID: 2601.04609v1

By: Rhea Kapur, Robert Hawkins, Elisa Kreiss

BigTech Affiliations: Stanford University

Potential Business Impact:

Enables image descriptions that are concise and information-dense rather than merely long, improving text-based access to visual content.

Business Areas:
Visual Search, Internet Services

Vision-language models (VLMs) are increasingly used to make visual content accessible via text-based descriptions. In current systems, however, description specificity is often conflated with length. We argue that these two concepts must be disentangled: descriptions can be concise yet dense with information, or lengthy yet vacuous. We define specificity relative to a contrast set, where a description is more specific to the extent that it picks out the target image better than other possible images. We construct a dataset that controls for length while varying information content, and validate that people reliably prefer more specific descriptions regardless of length. We find that controlling for length alone cannot account for differences in specificity: how the length budget is allocated makes a difference. These results support evaluation approaches that directly prioritize specificity over verbosity.
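The contrast-set definition of specificity can be illustrated with a minimal sketch: treat specificity as the probability (softmax over description-image similarity scores) that a description picks out the target image rather than the distractors. Everything here is hypothetical — the function name, the scores, and the softmax operationalization are illustrative assumptions, not the paper's actual metric.

```python
import math

def specificity(sim_scores, target_idx):
    """Hypothetical specificity measure: softmax probability that a
    description selects the target image over the other images in
    the contrast set. sim_scores are description-image similarity
    scores (e.g. from a VLM); target_idx indexes the true image."""
    exps = [math.exp(s) for s in sim_scores]
    return exps[target_idx] / sum(exps)

# Illustrative (made-up) similarity scores for one description
# against a 4-image contrast set; index 0 is the target image.
vague = [0.60, 0.55, 0.58, 0.57]  # long but vacuous: fits every image
dense = [0.90, 0.30, 0.25, 0.35]  # concise but discriminative

print(specificity(vague, 0))  # near chance (1/4) for a 4-image set
print(specificity(dense, 0))  # well above chance
```

Under this sketch, a vacuous description scores near chance because it matches every image about equally, while a discriminative one concentrates probability on the target — independent of how many words either description uses.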

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
11 pages

Category
Computer Science:
Computation and Language