LLM-Free Image Captioning Evaluation in Reference-Flexible Settings
By: Shinnosuke Hirano, Yuiga Wada, Kazuki Matsuda, and more
Potential Business Impact:
Helps computers judge picture descriptions better.
We focus on the automatic evaluation of image captions in both reference-based and reference-free settings. Existing metrics based on large language models (LLMs) favor their own generations, so their neutrality is in question. Most LLM-free metrics do not suffer from this issue, but they do not always achieve high performance. To address these issues, we propose Pearl, an LLM-free supervised metric for image captioning that is applicable to both reference-based and reference-free settings. We introduce a novel mechanism that learns representations of image-caption and caption-caption similarities. Furthermore, we construct a human-annotated dataset for image captioning metrics comprising approximately 333k human judgments collected from 2,360 annotators across over 75k images. Pearl outperformed existing LLM-free metrics on the Composite, Flickr8K-Expert, Flickr8K-CF, Nebula, and FOIL datasets in both reference-based and reference-free settings. Our project page is available at https://pearl.kinsta.page/.
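To make the two evaluation settings concrete, the sketch below scores a candidate caption from embedding similarities: image-caption similarity alone in the reference-free case, and a combination with caption-caption similarity to references in the reference-based case. This is a minimal illustration, not Pearl's actual learned mechanism; the random embeddings, the 512-dimensional size, and the 0.5/0.5 weighting are all placeholder assumptions.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)

# Placeholder embeddings; a real metric would use learned encoders
# (Pearl learns its own similarity representations, not shown here).
image_emb = rng.standard_normal(512)
candidate_emb = rng.standard_normal(512)
reference_embs = [rng.standard_normal(512) for _ in range(3)]

# Reference-free: score depends only on image-caption similarity.
score_free = cosine(image_emb, candidate_emb)

# Reference-based: additionally use caption-caption similarity,
# taking the best match over the available references.
# The equal weighting below is an arbitrary illustrative choice.
score_based = 0.5 * score_free + 0.5 * max(
    cosine(candidate_emb, r) for r in reference_embs
)

print(f"reference-free: {score_free:.3f}, reference-based: {score_based:.3f}")
```

A supervised metric like Pearl would instead train on human judgments (such as the 333k-judgment dataset described above) so that its scores correlate with annotator ratings rather than raw cosine similarity.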
Similar Papers
Image Captioning Evaluation in the Age of Multimodal LLMs: Challenges and Future Perspectives
CV and Pattern Recognition
Helps computers describe pictures more accurately.
Multilingual Training-Free Remote Sensing Image Captioning
CV and Pattern Recognition
Lets computers describe satellite pictures in any language.
LLM-Crowdsourced: A Benchmark-Free Paradigm for Mutual Evaluation of Large Language Models
Artificial Intelligence
Tests AI better by having AI ask and answer.