Score: 4

HCMA: Hierarchical Cross-model Alignment for Grounded Text-to-Image Generation

Published: May 10, 2025 | arXiv ID: 2505.06512v3

By: Hang Wang, Zhi-Qi Cheng, Chenhao Lin, and more

BigTech Affiliations: University of Washington

Potential Business Impact:

Lets AI image generators place objects exactly where you specify.

Business Areas:
Semantic Search, Internet Services

Text-to-image synthesis has progressed to the point where models can generate visually compelling images from natural language prompts. Yet, existing methods often fail to reconcile high-level semantic fidelity with explicit spatial control, particularly in scenes involving multiple objects, nuanced relations, or complex layouts. To bridge this gap, we propose a Hierarchical Cross-Modal Alignment (HCMA) framework for grounded text-to-image generation. HCMA integrates two alignment modules into each diffusion sampling step: a global module that continuously aligns latent representations with textual descriptions to ensure scene-level coherence, and a local module that employs bounding-box layouts to anchor objects at specified locations, enabling fine-grained spatial control. Extensive experiments on the MS-COCO 2014 validation set show that HCMA surpasses state-of-the-art baselines, achieving a 0.69 improvement in Fréchet Inception Distance (FID) and a 0.0295 gain in CLIP Score. These results demonstrate HCMA's effectiveness in faithfully capturing intricate textual semantics while adhering to user-defined spatial constraints, offering a robust solution for semantically grounded image generation. Our code is available at https://github.com/hwang-cs-ime/HCMA.
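The abstract does not spell out how the two alignment modules act on the latent at each sampling step; a common way to realize this kind of per-step alignment is gradient-based latent guidance. The PyTorch sketch below illustrates that pattern under stated assumptions: the function names, the toy linear projector `proj`, the loss weights `lambda_g`/`lambda_l`, and the random stand-in embeddings are all hypothetical, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def global_align_loss(latent, caption_emb, proj):
    """Scene-level alignment: spatially pooled latent vs. caption embedding."""
    pooled = latent.mean(dim=(2, 3))                 # (B, C)
    img_emb = F.normalize(proj(pooled), dim=-1)      # (B, D)
    txt_emb = F.normalize(caption_emb, dim=-1)
    return 1.0 - (img_emb * txt_emb).sum(-1).mean()

def local_align_loss(latent, boxes, phrase_embs, proj):
    """Object-level alignment: box-pooled latent vs. grounded phrase embeddings.

    boxes: list of (x0, y0, x1, y1) in normalized [0, 1] coordinates.
    phrase_embs: (N, D) tensor, one embedding per grounded phrase.
    """
    _, _, H, W = latent.shape
    losses = []
    for (x0, y0, x1, y1), ph in zip(boxes, phrase_embs):
        xs, xe = int(x0 * W), max(int(x0 * W) + 1, int(x1 * W))
        ys, ye = int(y0 * H), max(int(y0 * H) + 1, int(y1 * H))
        region = latent[:, :, ys:ye, xs:xe].mean(dim=(2, 3))  # (B, C)
        reg_emb = F.normalize(proj(region), dim=-1)
        losses.append(1.0 - (reg_emb * F.normalize(ph, dim=-1)).sum(-1).mean())
    return torch.stack(losses).mean()

# Toy stand-ins for a diffusion U-Net latent and text-encoder outputs.
B, C, H, W, D = 1, 4, 64, 64, 32
latent = torch.randn(B, C, H, W, requires_grad=True)
proj = torch.nn.Linear(C, D)              # stand-in for a learned image-to-text projector
caption_emb = torch.randn(B, D)           # stand-in for the caption embedding
boxes = [(0.1, 0.1, 0.5, 0.5), (0.55, 0.4, 0.95, 0.9)]
phrase_embs = torch.randn(len(boxes), D)  # one embedding per grounded phrase

lambda_g, lambda_l, step_size = 1.0, 1.0, 0.1
loss = (lambda_g * global_align_loss(latent, caption_emb, proj)
        + lambda_l * local_align_loss(latent, boxes, phrase_embs, proj))
(grad,) = torch.autograd.grad(loss, latent)
latent_corrected = latent - step_size * grad  # nudge latent toward the aligned direction
print(f"alignment loss: {loss.item():.4f}")
```

In a full pipeline, a correction of this kind would be applied to the latent at every denoising step, with `proj` replaced by a CLIP-style image-text projector and `phrase_embs` derived from the grounded caption; the actual HCMA modules may differ in detail.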

Country of Origin
πŸ‡­πŸ‡° πŸ‡¨πŸ‡³ πŸ‡ΊπŸ‡Έ Hong Kong, China, United States

Repos / Data Links
https://github.com/hwang-cs-ime/HCMA

Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition