Multi-Modal Language Models as Text-to-Image Model Evaluators
By: Jiahui Chen, Candace Ross, Reyhane Askari-Hemmat, and more
Potential Business Impact:
Tests AI art generators more accurately while using far fewer prompts.
Steady improvements in text-to-image (T2I) generative models are gradually rendering obsolete the automatic evaluation benchmarks that rely on static datasets, motivating researchers to seek alternative ways to evaluate T2I progress. In this paper, we explore the potential of multi-modal large language models (MLLMs) as evaluator agents that interact with a T2I model, with the objective of assessing prompt-generation consistency and image aesthetics. We present Multimodal Text-to-Image Eval (MT2IE), an evaluation framework that iteratively generates prompts for evaluation, scores the generated images, and matches the T2I evaluations of existing benchmarks while using only a fraction of their prompts. Moreover, we show that MT2IE's prompt-generation consistency scores correlate more strongly with human judgment than scores previously introduced in the literature. MT2IE generates prompts that efficiently probe T2I model performance, producing the same relative T2I model rankings as existing benchmarks while using only 1/80th as many prompts for evaluation.
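The abstract does not spell out implementation details, but the core idea is an iterative evaluation loop: an MLLM proposes a prompt, the T2I model under test renders it, and the MLLM scores the prompt-image consistency, with earlier prompts and scores fed back so later prompts can target observed weaknesses. Below is a minimal sketch of such a loop, assuming hypothetical interfaces (`propose_prompt`, `t2i_generate`, `mllm_score`); these names and the mean-score aggregation are illustrative assumptions, not the MT2IE implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

# Hypothetical interfaces: in practice these would wrap an MLLM (as prompt
# generator and judge) and the T2I model under evaluation. Names are
# illustrative only, not part of MT2IE.
GeneratePrompt = Callable[[List[str], List[float]], str]  # (past prompts, past scores) -> next prompt
GenerateImage = Callable[[str], Any]                      # prompt -> image
ScoreImage = Callable[[str, Any], float]                  # (prompt, image) -> consistency score in [0, 1]


@dataclass
class EvalRecord:
    prompt: str
    score: float


def evaluate_t2i_model(
    propose_prompt: GeneratePrompt,
    t2i_generate: GenerateImage,
    mllm_score: ScoreImage,
    num_rounds: int = 10,
) -> float:
    """Iteratively probe a T2I model and return an aggregate consistency score."""
    history: List[EvalRecord] = []
    for _ in range(num_rounds):
        past_prompts = [r.prompt for r in history]
        past_scores = [r.score for r in history]
        prompt = propose_prompt(past_prompts, past_scores)  # MLLM proposes the next probe
        image = t2i_generate(prompt)                         # T2I model renders it
        score = mllm_score(prompt, image)                    # MLLM judges prompt-image consistency
        history.append(EvalRecord(prompt, score))
    # Aggregate per-prompt scores into a single model-level score (simple mean here).
    return sum(r.score for r in history) / len(history)
```

Models can then be ranked by this aggregate score; the paper's claim is that such rankings match those of static benchmarks while requiring roughly 1/80th as many prompts.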
Similar Papers
Boosting Text-To-Image Generation via Multilingual Prompting in Large Multimodal Models
Computation and Language
Makes AI draw better pictures from words.
Test-time Prompt Refinement for Text-to-Image Models
Machine Learning (CS)
Fixes AI art mistakes by checking its own work.
LMM4LMM: Benchmarking and Evaluating Large-multimodal Image Generation with LMMs
CV and Pattern Recognition
Helps AI make better pictures from words.