Guiding What Not to Generate: Automated Negative Prompting for Text-Image Alignment
By: Sangha Park, Eunji Kim, Yeongtak Oh, and more
Potential Business Impact:
Makes AI pictures match words better.
Despite substantial progress in text-to-image generation, achieving precise text-image alignment remains challenging, particularly for prompts with rich compositional structure or imaginative elements. To address this, we introduce Negative Prompting for Image Correction (NPC), an automated pipeline that improves alignment by identifying and applying negative prompts that suppress unintended content. We begin by analyzing cross-attention patterns to explain why both targeted negatives (those directly tied to the prompt's alignment error) and untargeted negatives (tokens unrelated to the prompt but present in the generated image) can enhance alignment. To discover useful negatives, NPC generates candidate prompts using a verifier-captioner-proposer framework and ranks them with a salient text-space score, enabling effective selection without requiring additional image synthesis. On GenEval++ and Imagine-Bench, NPC outperforms strong baselines, achieving 0.571 vs. 0.371 on GenEval++ and the best overall performance on Imagine-Bench. By guiding what not to generate, NPC provides a principled, fully automated route to stronger text-image alignment in diffusion models. Code is released at https://github.com/wiarae/NPC.
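The abstract describes NPC as proposing candidate negative prompts, ranking them in text space, and applying the selected negative during generation. The sketch below is only a rough illustration of how a chosen negative prompt plugs into a standard diffusion pipeline through its negative-prompt (classifier-free guidance) pathway, with a toy CLIP text-similarity heuristic standing in for candidate ranking. The candidate list, the scoring heuristic, and all helper names are assumptions made for demonstration; they are not the paper's verifier-captioner-proposer framework or its salient text-space score.

```python
# Illustrative sketch only: selecting and applying a negative prompt.
# The hard-coded candidates and the similarity-based ranking below are
# placeholders, not NPC's actual pipeline.
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPTokenizer, CLIPTextModel

device = "cuda" if torch.cuda.is_available() else "cpu"

prompt = "a photo of three red apples and one green pear"
# Hypothetical candidate negatives (NPC would derive these automatically).
candidates = ["green apple", "two apples", "banana", "extra fruit"]

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32").to(device)

def embed(text: str) -> torch.Tensor:
    """Mean-pooled, normalized CLIP text embedding (a stand-in scoring space)."""
    tokens = tokenizer(text, padding=True, truncation=True, return_tensors="pt").to(device)
    with torch.no_grad():
        hidden = text_encoder(**tokens).last_hidden_state.mean(dim=1)
    return torch.nn.functional.normalize(hidden, dim=-1)

# Toy "text-space" ranking: pick the candidate most similar to the prompt,
# on the assumption that it names content likely to intrude on the image.
prompt_emb = embed(prompt)
scores = [(c, (embed(c) @ prompt_emb.T).item()) for c in candidates]
negative_prompt = max(scores, key=lambda s: s[1])[0]
print("selected negative prompt:", negative_prompt)

# Apply the selected negative prompt via the pipeline's standard
# negative-prompt argument (classifier-free guidance).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)
image = pipe(prompt, negative_prompt=negative_prompt, guidance_scale=7.5).images[0]
image.save("npc_sketch.png")
```

In this toy version the negative prompt is chosen from a fixed list with a single similarity score; the paper instead generates candidates automatically and evaluates them without synthesizing additional images, which is the part this sketch does not capture.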
Similar Papers
Negative Entity Suppression for Zero-Shot Captioning with Synthetic Images
CV and Pattern Recognition
Stops AI from making up fake things in pictures.
Prompt-Based Safety Guidance Is Ineffective for Unlearned Text-to-Image Diffusion Models
Machine Learning (CS)
Makes AI image makers safer from bad prompts.
Dynamic VLM-Guided Negative Prompting for Diffusion Models
CV and Pattern Recognition
Makes AI art look better by guiding it away from bad ideas.