Score: 2

LLM-Free Image Captioning Evaluation in Reference-Flexible Settings

Published: December 25, 2025 | arXiv ID: 2512.21582v1

By: Shinnosuke Hirano, Yuiga Wada, Kazuki Matsuda, and more

Potential Business Impact:

Helps computers judge picture descriptions better.

Business Areas:
Image Recognition Data and Analytics, Software

We focus on the automatic evaluation of image captions in both reference-based and reference-free settings. Existing metrics based on large language models (LLMs) favor their own generations, so their neutrality is in question. Most LLM-free metrics do not suffer from this issue, but they do not always achieve high performance. To address these issues, we propose Pearl, an LLM-free supervised metric for image captioning that is applicable to both reference-based and reference-free settings. We introduce a novel mechanism that learns representations of image-caption and caption-caption similarities. Furthermore, we construct a human-annotated dataset for image captioning metrics that comprises approximately 333k human judgments collected from 2,360 annotators across over 75k images. Pearl outperformed existing LLM-free metrics on the Composite, Flickr8K-Expert, Flickr8K-CF, Nebula, and FOIL datasets in both reference-based and reference-free settings. Our project page is available at https://pearl.kinsta.page/.
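The abstract describes two similarity signals: image-caption similarity (used in the reference-free setting) and caption-caption similarity (used when references are available). As a rough illustration of how such signals could be combined into one score, here is a toy sketch over pre-computed embeddings. The function names, the max-over-references choice, and the linear blending weight `alpha` are all illustrative assumptions, not Pearl's actual learned mechanism.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def caption_score(img_emb, cand_emb, ref_embs=None, alpha=0.5):
    """Toy caption score combining the two signals from the abstract.

    Reference-free: image-caption similarity only.
    Reference-based: blend in the best caption-caption similarity
    against the reference captions (illustrative assumption).
    """
    img_sim = cosine(img_emb, cand_emb)
    if not ref_embs:
        return img_sim
    ref_sim = max(cosine(cand_emb, r) for r in ref_embs)
    return alpha * img_sim + (1 - alpha) * ref_sim
```

The same function covers both settings, mirroring the paper's claim that Pearl is applicable to reference-based and reference-free evaluation; the actual metric is a supervised model trained on the 333k human judgments, not a fixed formula like this.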

Page Count
13 pages

Category
Computer Science:
CV and Pattern Recognition