DiffVC-OSD: One-Step Diffusion-based Perceptual Neural Video Compression Framework

Published: August 11, 2025 | arXiv ID: 2508.07682v1

By: Wenzhuo Ma, Zhenzhong Chen

Potential Business Impact:

Compresses video into smaller files while improving perceptual quality and decoding speed.

In this work, we propose DiffVC-OSD, a One-Step Diffusion-based Perceptual Neural Video Compression framework. Unlike conventional multi-step diffusion-based methods, DiffVC-OSD feeds the reconstructed latent representation directly into a One-Step Diffusion Model, enhancing perceptual quality through a single diffusion step guided by both the temporal context and the latent itself. To better leverage temporal dependencies, we design a Temporal Context Adapter that encodes conditional inputs into multi-level features, offering fine-grained guidance for the Denoising U-Net. Additionally, we employ an End-to-End Finetuning strategy to improve overall compression performance. Extensive experiments demonstrate that DiffVC-OSD achieves state-of-the-art perceptual compression performance, with about 20$\times$ faster decoding and an 86.92\% bitrate reduction compared to the corresponding multi-step diffusion-based variant.
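The data flow described above (reconstructed latent plus multi-level temporal features fed to a single denoising step) can be sketched as a toy NumPy example. All function names, shapes, and the pooling-based "adapter" below are illustrative assumptions, not the paper's actual architecture; the real adapter and denoiser are learned neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_context_adapter(context, num_levels=3):
    """Toy stand-in for the paper's Temporal Context Adapter: encode the
    temporal context into multi-level features via 2x2 average pooling
    at progressively coarser resolutions (names/shapes are illustrative)."""
    feats = []
    f = context
    for _ in range(num_levels):
        feats.append(f)
        h, w = f.shape
        # halve spatial resolution with 2x2 average pooling
        f = f[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return feats

def one_step_denoise(latent, feats, weight=0.5):
    """Single denoising step conditioned on the finest-level temporal
    feature. A real model would be a conditioned Denoising U-Net; this
    toy blend only shows the one-step (no iterative sampling) data flow."""
    return (1 - weight) * latent + weight * feats[0]

latent = rng.standard_normal((16, 16))    # reconstructed latent from the codec
context = rng.standard_normal((16, 16))   # temporal context from prior frames
feats = temporal_context_adapter(context)
enhanced = one_step_denoise(latent, feats)
print(enhanced.shape)
```

The key contrast with multi-step diffusion is that `one_step_denoise` is called exactly once per frame, which is what yields the decoding-speed advantage reported in the abstract.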

Page Count
5 pages

Category
Electrical Engineering and Systems Science:
Image and Video Processing