Surprisal reveals diversity gaps in image captioning and different scorers change the story
By: Nikolai Ilinykh, Simon Dobnik
Potential Business Impact:
Makes AI describe pictures more like people.
We quantify linguistic diversity in image captioning with surprisal variance: the spread of token-level negative log-probabilities (surprisals) within a caption set. On the MSCOCO test set, we compare five state-of-the-art vision-and-language LLMs, decoded with greedy and nucleus sampling, against human captions. Measured with a caption-trained n-gram LM, humans display roughly twice the surprisal variance of models, but rescoring the same captions with a general-language model reverses the pattern. We introduce this surprisal-based diversity metric for image captioning and show that relying on a single scorer can completely invert conclusions; robust diversity evaluation must therefore report surprisal under several scorers.
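The metric described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes token log-probabilities (natural log) have already been obtained from some scoring LM, and the toy caption probabilities below are made up for demonstration.

```python
import math
from statistics import pvariance

def surprisal_variance(token_logprobs):
    # A token's surprisal is its negative log-probability under the scoring LM.
    # The metric is the variance of these surprisals pooled over all tokens
    # in a caption set; a larger spread indicates more linguistic diversity.
    surprisals = [-lp for caption in token_logprobs for lp in caption]
    return pvariance(surprisals)

# Toy example: hypothetical per-token probabilities for two captions.
captions = [
    [math.log(0.2), math.log(0.05), math.log(0.5)],
    [math.log(0.1), math.log(0.3)],
]
print(surprisal_variance(captions))
```

Because the surprisals depend entirely on the scoring LM, rescoring the same captions with a different model can change the variance ordering between caption sets, which is the paper's central caveat.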
Similar Papers
Surprisal and Metaphor Novelty: Moderate Correlations and Divergent Scaling Effects
Computation and Language
Helps computers understand new, creative word uses.
Black-box Detection of LLM-generated Text Using Generalized Jensen-Shannon Divergence
Machine Learning (CS)
Finds fake writing made by computers.