LumiGen: An LVLM-Enhanced Iterative Framework for Fine-Grained Text-to-Image Generation
By: Xiaoqi Dong, Xiangyu Zhou, Nicholas Evans, and more
Potential Business Impact:
Makes AI draw pictures exactly as you describe.
Text-to-Image (T2I) generation has made significant advances with diffusion models, yet challenges persist in handling complex instructions, ensuring fine-grained content control, and maintaining deep semantic consistency. Existing T2I models often struggle with tasks like accurate text rendering, precise pose generation, or intricate compositional coherence. Concurrently, Large Vision-Language Models (LVLMs) have demonstrated powerful capabilities in cross-modal understanding and instruction following. We propose LumiGen, a novel LVLM-enhanced iterative framework designed to elevate T2I model performance, particularly in areas requiring fine-grained control, through a closed-loop, LVLM-driven feedback mechanism. LumiGen comprises an Intelligent Prompt Parsing & Augmentation (IPPA) module for proactive prompt enhancement and an Iterative Visual Feedback & Refinement (IVFR) module, which acts as a "visual critic" to iteratively correct and optimize generated images. Evaluated on the challenging LongBench-T2I benchmark, LumiGen achieves a superior average score of 3.08, outperforming state-of-the-art baselines. Notably, our framework demonstrates significant improvements in critical dimensions such as text rendering and pose expression, validating the effectiveness of LVLM integration for more controllable and higher-quality image generation.
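The abstract describes a two-stage closed loop: IPPA augments the prompt before generation, and IVFR uses an LVLM as a "visual critic" that scores the output and feeds corrections back into regeneration. The sketch below illustrates one plausible shape of that loop based only on the abstract; the function names (ippa_augment, t2i_generate, lvlm_critique), the score threshold, and the iteration cap are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the closed loop described in the abstract.
# All function bodies are hypothetical placeholders standing in for
# the actual IPPA module, T2I backbone, and LVLM critic.

from dataclasses import dataclass


@dataclass
class Critique:
    score: float   # assumed overall prompt-image consistency score from the critic
    feedback: str  # textual correction instructions (e.g. fix text rendering or pose)


def ippa_augment(prompt: str) -> str:
    """Hypothetical IPPA stub: enrich the prompt with fine-grained detail."""
    return prompt + " (detailed composition, legible text, accurate pose)"


def t2i_generate(prompt: str) -> str:
    """Hypothetical T2I backbone stub: a real system would call a diffusion model."""
    return f"<image generated from: {prompt}>"


def lvlm_critique(image: str, prompt: str) -> Critique:
    """Hypothetical IVFR critic stub: an LVLM judges consistency and suggests fixes."""
    return Critique(score=0.9, feedback="none")


def lumigen(prompt: str, max_iters: int = 3, threshold: float = 0.85) -> str:
    prompt = ippa_augment(prompt)        # proactive prompt enhancement (IPPA)
    image = t2i_generate(prompt)
    for _ in range(max_iters):           # iterative visual feedback loop (IVFR)
        critique = lvlm_critique(image, prompt)
        if critique.score >= threshold:
            break                        # image judged consistent with the prompt
        # fold the critic's corrections back into the prompt and regenerate
        image = t2i_generate(prompt + " | corrections: " + critique.feedback)
    return image


if __name__ == "__main__":
    print(lumigen('A street sign reading "LumiGen" beside a dancer mid-pirouette'))
```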
Similar Papers
An LLM-LVLM Driven Agent for Iterative and Fine-Grained Image Editing
CV and Pattern Recognition
Lets you change pictures by talking to them.
Lumina-OmniLV: A Unified Multimodal Framework for General Low-Level Vision
CV and Pattern Recognition
Makes pictures better in many ways.
LumiX: Structured and Coherent Text-to-Intrinsic Generation
CV and Pattern Recognition
Creates realistic 3D scenes from text descriptions.