AgentComp: From Agentic Reasoning to Compositional Mastery in Text-to-Image Models
By: Arman Zarei, Jiacheng Pan, Matthew Gwilliam, et al.
Text-to-image generative models have achieved remarkable visual quality but still struggle with compositionality: accurately capturing the object relationships, attribute bindings, and fine-grained details specified in prompts. A key limitation is that models are not explicitly trained to differentiate between compositionally similar prompts and images, so their outputs are often close to the intended description yet deviate in fine-grained details. To address this, we propose AgentComp, a framework that explicitly trains models to distinguish such compositional variations and thereby strengthens their compositional reasoning. AgentComp leverages the reasoning and tool-use capabilities of large language models equipped with image-generation, editing, and VQA tools to autonomously construct compositional datasets. Using these datasets, we apply an agentic preference-optimization method to fine-tune text-to-image models so that they better separate compositionally similar samples, yielding stronger compositional generation overall. AgentComp achieves state-of-the-art results on compositionality benchmarks such as T2I-CompBench without compromising image quality, a common drawback of prior approaches, and even generalizes to capabilities it was not explicitly trained for, such as text rendering.
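To make the two-stage idea concrete, below is a minimal Python sketch of such a pipeline. It is illustrative, not the authors' code: `generate`, `edit`, and `vqa_score` are hypothetical stand-ins for the agent's image-generation, editing, and VQA tools, and the preference loss is written in a Diffusion-DPO style (the abstract says only "agentic preference optimization", so the exact objective is an assumption here).

```python
"""Sketch of an AgentComp-style pipeline (illustrative; all tool
functions below are hypothetical placeholders, not the paper's API)."""
import torch
import torch.nn.functional as F

# --- Stage 1: agentic construction of compositional preference pairs ---

def generate(prompt: str) -> torch.Tensor:
    """Hypothetical image-generation tool: returns an image tensor."""
    return torch.rand(3, 512, 512)

def edit(image: torch.Tensor, instruction: str) -> torch.Tensor:
    """Hypothetical editing tool: applies a small compositional change
    (e.g. swapping an attribute binding) to create a hard negative."""
    return image + 0.01 * torch.randn_like(image)

def vqa_score(image: torch.Tensor, prompt: str) -> float:
    """Hypothetical VQA tool: fraction of prompt details the image satisfies."""
    return torch.rand(()).item()

def build_preference_pair(prompt: str, perturbation: str):
    """Return (winner, loser): a faithful image and a compositionally
    similar image that deviates in one fine-grained detail."""
    winner = generate(prompt)
    if vqa_score(winner, prompt) < 0.9:  # agent retries until faithful
        winner = generate(prompt)
    loser = edit(winner, perturbation)   # e.g. "make the cube red, not blue"
    return winner, loser

# --- Stage 2: preference optimization on the pairs (DPO-style, assumed) ---

def dpo_loss(err_w, err_l, err_w_ref, err_l_ref, beta: float = 0.1):
    """err_* are per-pair denoising errors ||eps - eps_hat||^2 under the
    fine-tuned model and a frozen reference model. Minimizing this pushes
    the fine-tuned model to reconstruct winners better than losers,
    relative to the reference, so it learns to separate near-identical
    compositions rather than treating them as interchangeable."""
    margin = (err_w - err_w_ref) - (err_l - err_l_ref)
    return -F.logsigmoid(-beta * margin).mean()

# Example with dummy denoising errors for a batch of 4 pairs:
e_w, e_l = torch.rand(4), torch.rand(4) + 0.5
loss = dpo_loss(e_w, e_l, err_w_ref=torch.rand(4), err_l_ref=torch.rand(4))
```

The key design point the sketch tries to capture is that the loser is produced by *editing* the winner, so each pair differs only in the targeted compositional detail; this is what gives the model a training signal for fine-grained distinctions that ordinary prompt-image pairs do not provide.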
Similar Papers
CompAlign: Improving Compositional Text-to-Image Generation with a Complex Benchmark and Fine-Grained Feedback
CV and Pattern Recognition
Improves compositional text-to-image generation using a complex benchmark and fine-grained feedback.
Easier Painting Than Thinking: Can Text-to-Image Models Set the Stage, but Not Direct the Play?
CV and Pattern Recognition
Probes whether text-to-image models can render the pieces of a scene yet fall short at planning how they compose.
VSC: Visual Search Compositional Text-to-Image Diffusion Model
CV and Pattern Recognition
A compositional text-to-image diffusion model built around visual search.