TextInVision: Text and Prompt Complexity Driven Visual Text Generation Benchmark
By: Forouzan Fallah, Maitreya Patel, Agneet Chatterjee, and more
Potential Business Impact:
Makes AI draw pictures with correct words.
Generating images with embedded text is crucial for the automatic production of visual and multimodal documents, such as educational materials and advertisements. However, existing diffusion-based text-to-image models often struggle to accurately embed text within images, facing challenges in spelling accuracy, contextual relevance, and visual coherence. Evaluating the ability of such models to embed text within a generated image is complicated due to the lack of comprehensive benchmarks. In this work, we introduce TextInVision, a large-scale, text and prompt complexity driven benchmark designed to evaluate the ability of diffusion models to effectively integrate visual text into images. We crafted a diverse set of prompts and texts that consider various attributes and text characteristics. Additionally, we prepared an image dataset to test Variational Autoencoder (VAE) models across different character representations, highlighting that VAE architectures can also pose challenges in text generation within diffusion frameworks. Through extensive analysis of multiple models, we identify common errors and highlight issues such as spelling inaccuracies and contextual mismatches. By pinpointing the failure points across different prompts and texts, our research lays the foundation for future advancements in AI-generated multimodal content.
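To make the kind of evaluation described above concrete, the sketch below shows one plausible way to score spelling accuracy of text rendered inside a generated image: OCR the image and compare the recognized string to the intended text. This is an illustrative assumption, not the paper's actual evaluation pipeline; the file name, prompt text, and the normalized edit-similarity metric are hypothetical stand-ins, and it assumes Pillow, pytesseract, and a Tesseract binary are available.

```python
# Illustrative sketch (not the TextInVision evaluation code): OCR a generated
# image and compare the result to the target text the prompt asked for.
from difflib import SequenceMatcher

from PIL import Image
import pytesseract


def spelling_accuracy(image_path: str, target_text: str) -> float:
    """Return a 0-1 similarity between OCR output and the intended text."""
    ocr_text = pytesseract.image_to_string(Image.open(image_path))
    # Normalize case and whitespace so layout differences do not count as errors.
    ocr_norm = " ".join(ocr_text.lower().split())
    target_norm = " ".join(target_text.lower().split())
    return SequenceMatcher(None, target_norm, ocr_norm).ratio()


if __name__ == "__main__":
    # Hypothetical file name and prompt text, for illustration only.
    score = spelling_accuracy("generated_sign.png", "GRAND OPENING")
    print(f"spelling accuracy: {score:.2f}")
```

A benchmark like the one described would typically aggregate such per-image scores across prompts of varying text length and complexity to pinpoint where spelling accuracy degrades.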
Similar Papers
T2VTextBench: A Human Evaluation Benchmark for Textual Control in Video Generation Models
CV and Pattern Recognition
Makes videos show words correctly.
VSC: Visual Search Compositional Text-to-Image Diffusion Model
CV and Pattern Recognition
Makes AI draw pictures with many details correctly.
EDITOR: Effective and Interpretable Prompt Inversion for Text-to-Image Diffusion Models
CV and Pattern Recognition
Finds the words that made a picture.