Unified Text-Image Generation with Weakness-Targeted Post-Training
By: Jiahui Chen, Philippe Hansen-Estruch, Xiaochuang Han, and more
Potential Business Impact:
Computers create pictures from words automatically.
Unified multimodal generation architectures that jointly produce text and images have recently emerged as a promising direction for text-to-image (T2I) synthesis. However, many existing systems rely on explicit modality switching, generating reasoning text before manually switching to image generation. This separate, sequential inference process limits cross-modal coupling and precludes fully automatic multimodal generation. This work explores post-training to achieve fully unified text-image generation, where models autonomously transition from textual reasoning to visual synthesis within a single inference process. We examine the impact of joint text-image generation on T2I performance and the relative importance of each modality during post-training. We also explore different post-training data strategies, showing that a targeted dataset addressing specific limitations achieves superior results compared to broad image-caption corpora or benchmark-aligned data. Using offline, reward-weighted post-training with fully self-generated synthetic data, our approach improves multimodal image generation across four diverse T2I benchmarks, demonstrating the effectiveness of reward-weighting both modalities and of strategically designed post-training data.
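The training recipe described above, offline post-training that reward-weights both the text and image portions of a single self-generated sequence, can be sketched in a few lines. The snippet below is a minimal illustration rather than the paper's implementation: the single-sequence token layout (text tokens followed by image tokens), the `modality_mask`, the `reward_weighted_loss` helper, and the two per-sample reward scalars (e.g., from a text-quality scorer and a T2I alignment scorer) are all assumptions made for the sketch.

```python
# Minimal sketch of offline, reward-weighted post-training for a unified
# text-image model. Hypothetical setup: one autoregressive sequence holds
# text tokens followed by image tokens; each sample carries an offline
# reward per modality.
import torch
import torch.nn.functional as F

def reward_weighted_loss(logits, targets, modality_mask, text_reward, image_reward):
    """Cross-entropy over the full sequence, weighted per modality.

    logits:        (B, T, V) model outputs for the joint text+image sequence
    targets:       (B, T)    next-token targets
    modality_mask: (B, T)    0 where the target is a text token, 1 for image
    text_reward:   (B,)      scalar reward for the reasoning-text segment
    image_reward:  (B,)      scalar reward for the generated-image segment
    """
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        reduction="none",
    ).reshape(targets.shape)                      # (B, T) per-token loss

    # Broadcast each sample's modality reward onto its own tokens.
    weights = torch.where(
        modality_mask.bool(),
        image_reward[:, None],                    # image-token positions
        text_reward[:, None],                     # text-token positions
    )
    return (weights * per_token).mean()

# Usage with stand-in tensors (no real model or scorer involved):
B, T, V = 2, 16, 1000
logits = torch.randn(B, T, V, requires_grad=True)
targets = torch.randint(0, V, (B, T))
modality_mask = (torch.arange(T) >= 8).expand(B, T).long()  # last half = image
text_r = torch.tensor([0.9, 0.2])   # e.g. text-quality rewards
image_r = torch.tensor([0.7, 1.0])  # e.g. T2I alignment rewards
loss = reward_weighted_loss(logits, targets, modality_mask, text_r, image_r)
loss.backward()
```

Under these assumptions, zeroing either reward recovers single-modality post-training, which mirrors the abstract's question about the relative importance of each modality during post-training.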
Similar Papers
Envision: Benchmarking Unified Understanding & Generation for Causal World Process Insights
CV and Pattern Recognition
Teaches computers to create stories with moving pictures.
TF-TI2I: Training-Free Text-and-Image-to-Image Generation via Multi-Modal Implicit-Context Learning in Text-to-Image Models
CV and Pattern Recognition
Makes pictures from text and other pictures.
Can Understanding and Generation Truly Benefit Together -- or Just Coexist?
CV and Pattern Recognition
Makes computers draw pictures from descriptions.