Score: 1

DyPE: Dynamic Position Extrapolation for Ultra High Resolution Diffusion

Published: October 23, 2025 | arXiv ID: 2510.20766v1

By: Noam Issachar, Guy Yariv, Sagie Benaim, and more

Potential Business Impact:

Lets pre-trained image-generation models produce ultra-high-resolution, detailed images without costly retraining.

Business Areas:
DSP Hardware

Diffusion Transformer models can generate images with remarkable fidelity and detail, yet training them at ultra-high resolutions remains extremely costly due to the self-attention mechanism's quadratic scaling with the number of image tokens. In this paper, we introduce Dynamic Position Extrapolation (DyPE), a novel, training-free method that enables pre-trained diffusion transformers to synthesize images at resolutions far beyond their training data, with no additional sampling cost. DyPE takes advantage of the spectral progression inherent to the diffusion process, where low-frequency structures converge early, while high frequencies take more steps to resolve. Specifically, DyPE dynamically adjusts the model's positional encoding at each diffusion step, matching its frequency spectrum to the current stage of the generative process. This approach allows us to generate images at resolutions that dramatically exceed the training resolution, e.g., 16 million pixels using FLUX. On multiple benchmarks, DyPE consistently improves performance and achieves state-of-the-art fidelity in ultra-high-resolution image generation, with gains becoming even more pronounced at higher resolutions. Project page is available at https://noamissachar.github.io/DyPE/.
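The abstract describes DyPE as rescaling the model's positional encoding at every diffusion step so its frequency content tracks the stage of denoising. The sketch below is a minimal illustration of that idea, assuming a RoPE-style encoding and a simple linear schedule; the function name, the schedule, and all parameters are hypothetical and not the paper's implementation.

```python
import torch


def dype_rope_frequencies(dim, train_len, target_len, t, base=10000.0):
    """Illustrative sketch (not the authors' code) of timestep-dependent
    positional-encoding scaling.

    Early in diffusion (t near 1, mostly noise) positions are compressed as
    if the sequence still fit the training length, favoring the
    low-frequency structure that converges first; as t -> 0 the scaling
    relaxes toward the true target length so high-frequency detail can
    resolve.
    """
    # Standard RoPE inverse frequencies for a head dimension of `dim`.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

    # Hypothetical schedule: blend between "compress positions into the
    # training window" (factor = train_len / target_len) and "use native
    # positions" (factor = 1.0) as denoising progresses.
    compress = train_len / target_len                  # < 1 when extrapolating
    factor = compress + (1.0 - compress) * (1.0 - t)   # t in [0, 1], 1 = pure noise

    positions = torch.arange(target_len).float() * factor
    angles = torch.outer(positions, inv_freq)          # (target_len, dim // 2)
    return torch.cos(angles), torch.sin(angles)


# Example: 64-dim attention heads trained on 1024 tokens, sampling at 4096
# tokens, queried midway through the reverse diffusion process (t = 0.5).
cos, sin = dype_rope_frequencies(dim=64, train_len=1024, target_len=4096, t=0.5)
```

Because the scaling factor depends only on the diffusion timestep, the encodings can be recomputed per step at negligible cost, which is consistent with the abstract's claim of no additional sampling cost.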

Country of Origin
🇮🇱 Israel

Page Count
25 pages

Category
Computer Science:
Computer Vision and Pattern Recognition