ScaleDiff: Higher-Resolution Image Synthesis via Efficient and Model-Agnostic Diffusion

Published: October 29, 2025 | arXiv ID: 2510.25818v1

By: Sungho Koh, SeungJu Cha, Hyunwoo Oh, and more

Potential Business Impact:

Lets pretrained AI image models generate larger, sharper pictures without retraining.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Text-to-image diffusion models often exhibit degraded performance when generating images beyond their training resolution. Recent training-free methods can mitigate this limitation, but they often require substantial computation or are incompatible with recent Diffusion Transformer models. In this paper, we propose ScaleDiff, a model-agnostic and highly efficient framework for extending the resolution of pretrained diffusion models without any additional training. A core component of our framework is Neighborhood Patch Attention (NPA), an efficient mechanism that reduces computational redundancy in the self-attention layer with non-overlapping patches. We integrate NPA into an SDEdit pipeline and introduce Latent Frequency Mixing (LFM) to better generate fine details. Furthermore, we apply Structure Guidance to enhance global structure during the denoising process. Experimental results demonstrate that ScaleDiff achieves state-of-the-art performance among training-free methods in terms of both image quality and inference speed on both U-Net and Diffusion Transformer architectures.
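The efficiency core of the framework, Neighborhood Patch Attention (NPA), restricts self-attention to non-overlapping patches of the token grid. The sketch below shows patch-local attention in that spirit, in PyTorch; the function name, the `patch_size` parameter, and the partitioning details are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of attention restricted to non-overlapping patches,
# in the spirit of NPA. Names and partitioning are assumptions.
import torch
import torch.nn.functional as F

def patch_local_attention(q, k, v, h, w, patch_size=8):
    """Self-attention computed only within non-overlapping patches.

    q, k, v: (B, N, C) token tensors with N == h * w.
    Returns: (B, N, C) attended tokens.
    """
    B, N, C = q.shape
    p = patch_size
    assert h % p == 0 and w % p == 0, "spatial dims must divide patch_size"

    def to_patches(x):
        # (B, N, C) -> (B * num_patches, p*p, C)
        x = x.view(B, h // p, p, w // p, p, C)
        return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, p * p, C)

    qp, kp, vp = map(to_patches, (q, k, v))
    # Attention only among the p*p tokens of each patch:
    # cost drops from O(N^2) to O(N * p^2).
    out = F.scaled_dot_product_attention(qp, kp, vp)
    # Invert the patch partition back to (B, N, C).
    out = out.view(B, h // p, w // p, p, p, C)
    return out.permute(0, 1, 3, 2, 4, 5).reshape(B, N, C)
```

Confining attention to fixed-size patches is what turns the quadratic cost in token count into a linear one, which is plausibly where the abstract's claimed speedup at high resolution comes from.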
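Latent Frequency Mixing (LFM) is described only as a way to better generate fine details; a natural reading is a frequency-band blend of two latents. The sketch below assumes a simple FFT-based split with an illustrative `cutoff` parameter; the paper's actual mixing rule and schedule are not specified here.

```python
# Minimal sketch of mixing frequency bands of two latents, in the
# spirit of LFM. The box mask and `cutoff` value are assumptions.
import torch

def mix_latent_frequencies(low_src, high_src, cutoff=0.25):
    """Keep low frequencies of `low_src`, high frequencies of `high_src`.

    low_src, high_src: (B, C, H, W) latents of identical shape.
    cutoff: fraction of the spectrum (per axis) taken from `low_src`.
    """
    B, C, H, W = low_src.shape
    fl = torch.fft.fftshift(torch.fft.fft2(low_src), dim=(-2, -1))
    fh = torch.fft.fftshift(torch.fft.fft2(high_src), dim=(-2, -1))

    # Centered box mask selecting the low-frequency band.
    mask = torch.zeros(H, W, device=low_src.device)
    ch, cw = int(H * cutoff / 2), int(W * cutoff / 2)
    mask[H // 2 - ch : H // 2 + ch, W // 2 - cw : W // 2 + cw] = 1.0

    mixed = fl * mask + fh * (1.0 - mask)
    return torch.fft.ifft2(torch.fft.ifftshift(mixed, dim=(-2, -1))).real
```

In the SDEdit-style pipeline the abstract describes, a blend of this kind would let the upscaled latent keep its global structure while regaining high-frequency detail from the denoising pass.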

Country of Origin
🇰🇷 Korea, Republic of

Page Count
19 pages

Category
Computer Science: Machine Learning (CS)