Score: 1

Generative Preprocessing for Image Compression with Pre-trained Diffusion Models

Published: December 17, 2025 | arXiv ID: 2512.15270v1

By: Mengxi Guo, Shijie Zhao, Junlin Li, and more

BigTech Affiliations: ByteDance

Potential Business Impact:

Makes images look better after compression, without modifying the codec itself.

Business Areas:
Image Recognition Data and Analytics, Software

Preprocessing is a well-established technique for optimizing compression, yet existing methods are predominantly Rate-Distortion (R-D) optimized and constrained by pixel-level fidelity. This work pioneers a shift towards Rate-Perception (R-P) optimization by, for the first time, adapting a large-scale pre-trained diffusion model for compression preprocessing. We propose a two-stage framework: first, we distill the multi-step Stable Diffusion 2.1 into a compact, one-step image-to-image model using Consistent Score Identity Distillation (CiD). Second, we perform a parameter-efficient fine-tuning of the distilled model's attention modules, guided by a Rate-Perception loss and a differentiable codec surrogate. Our method seamlessly integrates with standard codecs without any modification and leverages the model's powerful generative priors to enhance texture and mitigate artifacts. Experiments show substantial R-P gains, achieving up to a 30.13% BD-rate reduction in DISTS on the Kodak dataset and delivering superior subjective visual quality.
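The headline result is reported as a BD-rate reduction measured with DISTS. BD-rate (the Bjøntegaard delta rate) summarizes the average bitrate saving between two rate-quality curves at equal quality. As a point of reference, here is a minimal sketch of the standard BD-rate computation (this is the general metric, not code from the paper): each curve's log-rate is fit with a cubic polynomial in the quality score, the fits are integrated over the overlapping quality range, and the mean log-rate gap is converted to a percentage.

```python
import numpy as np

def bd_rate(rates_anchor, quality_anchor, rates_test, quality_test):
    """Bjøntegaard delta rate: average bitrate difference (%) at equal quality.

    A negative result means the test codec/preprocessor needs less bitrate
    than the anchor to reach the same quality.
    """
    lr_a = np.log(np.asarray(rates_anchor, dtype=float))
    lr_t = np.log(np.asarray(rates_test, dtype=float))
    # Fit log-rate as a cubic function of the quality metric.
    p_a = np.polyfit(quality_anchor, lr_a, 3)
    p_t = np.polyfit(quality_test, lr_t, 3)
    # Integrate both fits over the shared quality interval.
    lo = max(min(quality_anchor), min(quality_test))
    hi = min(max(quality_anchor), max(quality_test))
    int_a = np.polyint(p_a)
    int_t = np.polyint(p_t)
    avg_a = (np.polyval(int_a, hi) - np.polyval(int_a, lo)) / (hi - lo)
    avg_t = (np.polyval(int_t, hi) - np.polyval(int_t, lo)) / (hi - lo)
    # Convert the mean log-rate difference back to a percentage.
    return (np.exp(avg_t - avg_a) - 1.0) * 100.0
```

For a quality metric where lower is better (such as DISTS), the same formula applies as long as both curves use the metric consistently; in practice the metric is often negated so that the quality axis increases. A "30.13% BD-rate reduction" corresponds to `bd_rate(...)` returning about -30.13 for the proposed method versus the anchor.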

Country of Origin
🇨🇳 China

Page Count
10 pages

Category
Electrical Engineering and Systems Science:
Image and Video Processing