Score: 1

Multimodal Benchmarking and Recommendation of Text-to-Image Generation Models

Published: May 6, 2025 | arXiv ID: 2505.04650v1

By: Kapil Wanaskar, Gaytri Jena, Magdalini Eirinaki

Potential Business Impact:

Improves the quality and detail of AI-generated images by enriching text prompts with structured metadata, and guides users toward the best model for a given task.

Business Areas:
Visual Search, Internet Services

This work presents an open-source unified benchmarking and evaluation framework for text-to-image generation models, with a particular focus on the impact of metadata-augmented prompts. Leveraging the DeepFashion-MultiModal dataset, we assess generated outputs through a comprehensive set of quantitative metrics, including a Weighted Score, CLIP (Contrastive Language-Image Pre-training)-based similarity, LPIPS (Learned Perceptual Image Patch Similarity), FID (Fréchet Inception Distance), and retrieval-based measures, as well as qualitative analysis. Our results demonstrate that structured metadata enrichment substantially improves visual realism, semantic fidelity, and model robustness across diverse text-to-image architectures. While not a traditional recommender system, our framework enables task-specific recommendations for model selection and prompt design based on evaluation metrics.
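The abstract combines several metrics (CLIP similarity, LPIPS, FID) into a single Weighted Score. The paper's exact weights and normalization are not given here, so the sketch below is an illustrative assumption: each metric is min-max normalized to [0, 1] (inverting metrics where lower is better, such as LPIPS and FID) and combined by a weighted average. All numeric ranges and weights are hypothetical.

```python
# Hypothetical sketch of a "Weighted Score" over per-model metric results.
# Weights, ranges, and sample values below are illustrative assumptions,
# not the paper's actual configuration.

def normalize(value, lo, hi, higher_is_better=True):
    """Min-max normalize a raw metric value into [0, 1]."""
    span = hi - lo
    x = (value - lo) / span if span else 0.0
    x = min(max(x, 0.0), 1.0)          # clamp out-of-range values
    return x if higher_is_better else 1.0 - x

def weighted_score(metrics, weights):
    """Weighted average of normalized metrics; result lies in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[name] * metrics[name] for name in weights) / total

# Example: CLIP similarity (higher is better), LPIPS and FID (lower is better).
metrics = {
    "clip": normalize(0.31, 0.0, 0.4),                        # CLIP cosine similarity
    "lpips": normalize(0.45, 0.0, 1.0, higher_is_better=False),
    "fid": normalize(28.0, 0.0, 100.0, higher_is_better=False),
}
weights = {"clip": 0.5, "lpips": 0.25, "fid": 0.25}
score = weighted_score(metrics, weights)   # single comparable number per model
```

A score like this makes heterogeneous metrics comparable across models, which is what enables the task-specific model recommendations the abstract describes.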

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Graphics