Score: 1

Unified Text-Image Generation with Weakness-Targeted Post-Training

Published: January 7, 2026 | arXiv ID: 2601.04339v1

By: Jiahui Chen, Philippe Hansen-Estruch, Xiaochuang Han, and more

BigTech Affiliations: Meta

Potential Business Impact:

Computers reason through a request in words and then create the picture, all in one automatic step.

Business Areas:
Visual Search, Internet Services

Unified multimodal generation architectures that jointly produce text and images have recently emerged as a promising direction for text-to-image (T2I) synthesis. However, many existing systems rely on explicit modality switching, generating reasoning text before manually switching to image generation. This separate, sequential inference process limits cross-modal coupling and precludes fully automatic multimodal generation. This work explores post-training to achieve fully unified text-image generation, where models autonomously transition from textual reasoning to visual synthesis within a single inference pass. We examine the impact of joint text-image generation on T2I performance and the relative importance of each modality during post-training. We additionally explore different post-training data strategies, showing that a targeted dataset addressing specific model weaknesses achieves superior results compared to broad image-caption corpora or benchmark-aligned data. Using offline, reward-weighted post-training on fully self-generated synthetic data, our approach improves multimodal image generation across four diverse T2I benchmarks, demonstrating the effectiveness of reward-weighting both modalities and of strategically designed post-training data.
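The abstract's core recipe is offline, reward-weighted post-training applied to both modalities. The paper does not spell out the objective here, but a common instantiation is to weight the negative log-likelihood of self-generated, interleaved text-and-image-token sequences by a per-sample scalar reward, with separate weights for text and image tokens. The PyTorch sketch below illustrates that assumption; all names (`reward_weighted_loss`, `modality_mask`, `text_weight`, `image_weight`) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def reward_weighted_loss(logits, targets, modality_mask, rewards,
                         text_weight=1.0, image_weight=1.0):
    """Offline reward-weighted post-training loss over a unified
    text-image token sequence (illustrative sketch; the paper's
    exact weighting scheme may differ).

    logits:        (B, T, V) model outputs over the joint vocabulary
    targets:       (B, T)    self-generated target tokens
    modality_mask: (B, T)    1 where the token is an image token, 0 for text
    rewards:       (B,)      per-sample scalar rewards, e.g. from a T2I scorer
    """
    # Per-token negative log-likelihood over the joint vocabulary.
    nll = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        reduction="none",
    ).reshape(targets.shape)                     # (B, T)

    # Weight each modality separately, so text reasoning tokens and
    # image tokens can contribute unequally to the update.
    per_token = torch.where(
        modality_mask.bool(),
        image_weight * nll,
        text_weight * nll,
    )

    # Weight whole samples by their reward (offline: no on-policy rollout).
    per_sample = per_token.mean(dim=1)           # (B,)
    return (rewards * per_sample).mean()
```

Under this reading, the "weakness-targeted" dataset supplies the self-generated sequences and the reward model scores them once up front, after which training is ordinary weighted maximum likelihood rather than online RL.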

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
20 pages

Category
Computer Science:
Computer Vision and Pattern Recognition