TextGuider: Training-Free Guidance for Text Rendering via Attention Alignment
By: Kanghyun Baek, Sangyub Lee, Jin Young Choi, and more
Despite recent advances, diffusion-based text-to-image models still struggle with accurate text rendering. Several studies have proposed fine-tuning or training-free refinement methods to improve rendering accuracy, but the critical issue of text omission, where the desired text is partially or entirely missing from the image, remains largely overlooked. In this work, we propose TextGuider, a novel training-free method that encourages accurate and complete text appearance by aligning textual content tokens with the image regions where text is rendered. Specifically, we analyze the attention patterns of MM-DiT models, focusing on text-related tokens intended to be rendered in the image. Leveraging this analysis, we apply latent guidance during the early denoising steps using two loss functions that we introduce. Our method achieves state-of-the-art performance in test-time text rendering, with significant gains in recall and strong results in OCR accuracy and CLIP score.
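As a rough illustration of the kind of test-time guidance the abstract describes, the PyTorch sketch below couples an attention-alignment loss with a gradient update on the latents. Everything here is an assumption for illustration: the names (attention_alignment_losses, guided_step), the specific loss forms, the region_mask input, and the premise that the model exposes its image-to-prompt attention maps are hypothetical, not TextGuider's published formulation, which defines its own two losses over MM-DiT attention patterns.

```python
import torch

def attention_alignment_losses(attn_img2txt, text_token_idx, region_mask):
    """Two illustrative losses rewarding alignment between text-content
    prompt tokens and the image region where the text should appear.

    attn_img2txt:   (heads, n_image_tokens, n_prompt_tokens) slice of the
                    MM-DiT joint attention, image queries -> prompt keys
    text_token_idx: indices of prompt tokens to be rendered as glyphs
    region_mask:    (n_image_tokens,) soft mask of the target text region
    """
    # Attention mass each image patch assigns to the text-content tokens,
    # averaged over heads and tokens, then normalized over patches.
    mass = attn_img2txt[:, :, text_token_idx].mean(dim=(0, 2))
    mass = mass / (mass.sum() + 1e-8)

    # Loss 1: concentrate attention mass inside the text region.
    loss_inside = -(mass * region_mask).sum()
    # Loss 2: penalize attention leaking outside the region.
    loss_outside = (mass * (1.0 - region_mask)).sum()
    return loss_inside + loss_outside

def guided_step(latents, t, model, text_token_idx, region_mask, scale=1.0):
    """One denoising step with training-free latent guidance: backprop
    the alignment loss to the latents and nudge them downhill.
    Assumes `model` returns the noise prediction together with the
    image-to-prompt attention maps (a hypothetical interface)."""
    latents = latents.detach().requires_grad_(True)
    noise_pred, attn_img2txt = model(latents, t)
    loss = attention_alignment_losses(attn_img2txt, text_token_idx, region_mask)
    grad, = torch.autograd.grad(loss, latents)
    return latents.detach() - scale * grad, noise_pred.detach()
```

Consistent with the abstract, such guidance would be applied only during the early denoising steps, where the spatial layout of the rendered text is still being decided, and disabled afterwards so later steps can refine glyph details undisturbed.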