Augmented Deep Contexts for Spatially Embedded Video Coding
By: Yifan Bian, Chuanbo Tang, Li Li, and more
Potential Business Impact:
Makes videos look better with less data.
Most Neural Video Codecs (NVCs) employ only temporal references to generate temporal-only contexts and a latent prior. These temporal-only NVCs fail to handle large motions or newly emerging objects because their contexts are limited and their latent prior is misaligned. To address these limitations, we propose a Spatially Embedded Video Codec (SEVC), which additionally compresses a low-resolution version of the video to provide spatial references. First, SEVC leverages both spatial and temporal references to generate augmented motion vectors and hybrid spatial-temporal contexts. Second, to resolve the misalignment in the latent prior and enrich the prior information, we introduce a spatial-guided latent prior augmented by multiple temporal latent representations. Finally, we design a joint spatial-temporal optimization that learns quality-adaptive bit allocation for the spatial references, further boosting rate-distortion performance. Experimental results show that SEVC effectively alleviates the difficulties of handling large motions and emerging objects, and reduces bitrate by a further 11.9% compared with the previous state-of-the-art NVC, while also providing an additional low-resolution bitstream. Our code and model are available at https://github.com/EsakaK/SEVC.
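The abstract describes three mechanisms: spatially augmented motion vectors, hybrid spatial-temporal contexts, and a spatial-guided latent prior conditioned on multiple temporal latents. The minimal PyTorch sketch below shows one way those pieces could fit together in a decoding step; every module name and shape here (`SEVCSketch`, `warp`, `motion_aug`, `ctx_fuse`, `prior`) is an illustrative assumption, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(feature: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp a feature map by a dense flow field (x, y offsets in pixels)."""
    b, _, h, w = feature.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=feature.dtype),
        torch.arange(w, dtype=feature.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0).to(feature.device) + flow
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    gx = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(feature, torch.stack((gx, gy), dim=-1), align_corners=True)


class SEVCSketch(nn.Module):
    """Hypothetical skeleton of a spatially embedded context/prior generator."""

    def __init__(self, ch: int = 64):
        super().__init__()
        self.feat = nn.Conv2d(3, ch, 3, padding=1)             # frame -> features
        self.motion_aug = nn.Conv2d(2 + ch, 2, 3, padding=1)   # refine flow with spatial cues
        self.ctx_fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)    # hybrid spatial-temporal context
        self.prior = nn.Conv2d(2 * ch, ch, 3, padding=1)       # spatial-guided latent prior

    def forward(self, temporal_ref, lowres_ref, base_flow, temporal_latents):
        # 1) Upsample the decoded low-resolution frame to act as a spatial reference.
        spatial_ref = F.interpolate(lowres_ref, size=temporal_ref.shape[-2:],
                                    mode="bilinear", align_corners=False)
        s_feat = self.feat(spatial_ref)
        t_feat = self.feat(temporal_ref)

        # 2) Augment motion vectors with spatial cues (helps large motion / new objects).
        flow = base_flow + self.motion_aug(torch.cat([base_flow, s_feat], dim=1))

        # 3) Hybrid context: warped temporal features fused with spatial features.
        context = self.ctx_fuse(torch.cat([warp(t_feat, flow), s_feat], dim=1))

        # 4) Latent prior guided by spatial features and pooled temporal latents.
        pooled = torch.stack(temporal_latents).mean(dim=0)
        latent_prior = self.prior(torch.cat([s_feat, pooled], dim=1))
        return context, latent_prior


if __name__ == "__main__":
    model = SEVCSketch()
    ctx, prior = model(
        torch.randn(1, 3, 64, 64),                         # previously decoded frame
        torch.randn(1, 3, 32, 32),                         # decoded low-resolution frame
        torch.zeros(1, 2, 64, 64),                         # initial motion estimate
        [torch.randn(1, 64, 64, 64) for _ in range(2)],    # past temporal latents
    )
    print(ctx.shape, prior.shape)
```

The joint spatial-temporal optimization for quality-adaptive bit allocation is a training objective rather than a network component, so it is not represented in this sketch.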
Similar Papers
L-STEC: Learned Video Compression with Long-term Spatio-Temporal Enhanced Context
CV and Pattern Recognition
Makes videos smaller by remembering more.
BiECVC: Gated Diversification of Bidirectional Contexts for Learned Video Compression
Image and Video Processing
Makes videos smaller for faster sending.
Single-step Diffusion-based Video Coding with Semantic-Temporal Guidance
CV and Pattern Recognition
Makes videos look good even with less data.