Re-Thinking the Automatic Evaluation of Image-Text Alignment in Text-to-Image Models
By: Huixuan Zhang, Xiaojun Wan
Potential Business Impact:
Makes AI pictures match words better.
Text-to-image models often struggle to generate images that precisely match textual prompts. Prior research has extensively studied the evaluation of image-text alignment in text-to-image generation. However, existing evaluations focus primarily on agreement with human assessments, neglecting other critical properties of a trustworthy evaluation framework. In this work, we first identify two key properties that a reliable evaluation should satisfy. We then empirically demonstrate that current mainstream evaluation frameworks fail to fully satisfy these properties across a diverse range of metrics and models. Finally, we propose recommendations for improving image-text alignment evaluation.
Similar Papers
CulturalFrames: Assessing Cultural Expectation Alignment in Text-to-Image Models and Evaluation Metrics
CV and Pattern Recognition
Makes AI art show cultures correctly.
Evaluating the Evaluators: Metrics for Compositional Text-to-Image Generation
CV and Pattern Recognition
Checks how well metrics judge whether AI pictures match words.
Text-to-Image Alignment in Denoising-Based Models through Step Selection
CV and Pattern Recognition
Makes AI pictures match words better.