SARA: Structural and Adversarial Representation Alignment for Training-efficient Diffusion Models
By: Hesen Chen, Junyan Wang, Zhiyu Tan, and more
Potential Business Impact:
Makes AI create better pictures faster.
Modern diffusion models encounter a fundamental trade-off between training efficiency and generation quality. While existing representation alignment methods, such as REPA, accelerate convergence through patch-wise alignment, they often fail to capture structural relationships within visual representations or to ensure global distribution consistency between pretrained encoders and denoising networks. To address these limitations, we introduce SARA, a hierarchical alignment framework that enforces multi-level representation constraints: (1) patch-wise alignment to preserve local semantic details, (2) autocorrelation matrix alignment to maintain structural consistency within representations, and (3) adversarial distribution alignment to mitigate global representation discrepancies. Unlike previous approaches, SARA explicitly models both intra-representation correlations via self-similarity matrices and inter-distribution coherence via adversarial alignment, enabling comprehensive alignment across local and global scales. Experiments on ImageNet 256×256 show that SARA achieves an FID of 1.36 while converging twice as fast as REPA, surpassing recent state-of-the-art image generation methods. This work establishes a systematic paradigm for optimizing diffusion training through hierarchical representation alignment.
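The abstract describes three alignment levels (patch-wise, autocorrelation/self-similarity, and adversarial). The sketch below (PyTorch) shows one plausible way such losses could be written, based only on the abstract: the assumption that denoiser features are already projected to the encoder's feature dimension, the cosine / Frobenius / non-saturating GAN loss forms, the pooled MLP discriminator, and the lambda_* weights are all illustrative choices, not the authors' implementation.

```python
# Hedged sketch of SARA-style hierarchical alignment losses (not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def patch_alignment_loss(h_denoiser, z_encoder):
    """(1) Patch-wise alignment (REPA-style): match each denoiser patch feature
    (assumed already projected to the encoder dimension) to the corresponding
    pretrained-encoder patch feature via cosine similarity."""
    # h_denoiser, z_encoder: [B, N, D] patch tokens
    return (1.0 - F.cosine_similarity(h_denoiser, z_encoder, dim=-1)).mean()


def autocorrelation_loss(h_denoiser, z_encoder):
    """(2) Structural alignment: match the patch-patch self-similarity
    (autocorrelation) matrices of the two representations."""
    def self_similarity(x):                      # [B, N, D] -> [B, N, N]
        x = F.normalize(x, dim=-1)
        return x @ x.transpose(1, 2)
    return F.mse_loss(self_similarity(h_denoiser), self_similarity(z_encoder))


class FeatureDiscriminator(nn.Module):
    """(3) Small MLP discriminator for adversarial distribution alignment:
    scores pooled representations as 'encoder' (real) vs 'denoiser' (fake)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, 1))

    def forward(self, feats):                    # feats: [B, N, D]
        return self.net(feats.mean(dim=1))       # pool patches -> [B, 1] logits


def sara_generator_loss(h_denoiser, z_encoder, disc,
                        lambda_patch=1.0, lambda_struct=1.0, lambda_adv=0.1):
    """Alignment terms added to the usual denoising loss; weights are placeholders."""
    logits = disc(h_denoiser)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return (lambda_patch * patch_alignment_loss(h_denoiser, z_encoder)
            + lambda_struct * autocorrelation_loss(h_denoiser, z_encoder)
            + lambda_adv * adv)


def sara_discriminator_loss(h_denoiser, z_encoder, disc):
    """Discriminator update: encoder features are 'real', denoiser features 'fake'."""
    real, fake = disc(z_encoder), disc(h_denoiser.detach())
    return (F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
            + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
```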
Similar Papers
What matters for Representation Alignment: Global Information or Spatial Structure?
CV and Pattern Recognition
Makes AI pictures better by focusing on details.
No Other Representation Component Is Needed: Diffusion Transformers Can Provide Representation Guidance by Themselves
CV and Pattern Recognition
Teaches computers to create better pictures faster.
Cross-Frame Representation Alignment for Fine-Tuning Video Diffusion Models
CV and Pattern Recognition
Makes AI videos look real and consistent.