Score: 1

Mitigating Diffusion Model Hallucinations with Dynamic Guidance

Published: October 6, 2025 | arXiv ID: 2510.05356v1

By: Kostas Triaridis, Alexandros Graikos, Aggelina Chatziagapi, and more

Potential Business Impact:

Reduces structural artifacts in AI-generated images, making them look more realistic without sacrificing variety.

Business Areas:
Media and Entertainment

Diffusion models, despite their impressive generative capabilities, often produce hallucinated samples with structural inconsistencies that lie outside the support of the true data distribution. Such hallucinations can be attributed to excessive smoothing between modes of the data distribution. However, semantic interpolations are often desirable and can lead to generation diversity, so we believe a more nuanced solution is required. In this work, we introduce Dynamic Guidance, which mitigates hallucinations by selectively sharpening the score function only along predetermined directions known to cause artifacts, while preserving valid semantic variations. To our knowledge, this is the first approach that addresses hallucinations at generation time rather than through post-hoc filtering. Dynamic Guidance substantially reduces hallucinations on both controlled and natural image datasets, significantly outperforming baselines.
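The abstract only sketches the mechanism, but the core idea, sharpening the score along a fixed set of artifact-prone directions while leaving orthogonal components untouched, can be illustrated with a short PyTorch sketch. Everything below is an assumption for illustration: the function name, the `gamma` factor, and the representation of the predetermined directions as an orthonormal matrix are not taken from the paper.

```python
import torch

def sharpened_score(score, artifact_dirs, gamma=2.0):
    """Hypothetical sketch of direction-selective score sharpening.

    score:         (B, D) flattened score estimate from the diffusion model
    artifact_dirs: (K, D) orthonormal basis of directions known to cause
                   artifacts (how the paper determines these is not stated
                   in the abstract)
    gamma:         sharpening factor > 1 (hypothetical parameter name)
    """
    # Decompose the score into components inside and outside the
    # artifact-prone subspace.
    coeffs = score @ artifact_dirs.T       # (B, K) projection coefficients
    parallel = coeffs @ artifact_dirs      # (B, D) component along artifact dirs
    orthogonal = score - parallel          # (B, D) valid semantic variation

    # Sharpen only the artifact-prone component; semantic directions pass
    # through unchanged, preserving generation diversity.
    return orthogonal + gamma * parallel
```

In an ancestral or ODE sampling loop, this sharpened score would stand in for the raw model output at each step. Whether the sharpening factor varies with the noise level, and how the directions are precomputed, are details the abstract leaves open.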

Repos / Data Links

Page Count
22 pages

Category
Computer Science:
Computer Vision and Pattern Recognition