L-STEC: Learned Video Compression with Long-term Spatio-Temporal Enhanced Context
By: Tiange Zhang, Zhimeng Huang, Xiandong Meng, and more
Potential Business Impact:
Makes videos smaller by remembering more.
Neural video compression has advanced rapidly in recent years, with conditional coding frameworks now outperforming traditional codecs. However, most existing methods rely solely on the previous frame's features to predict the temporal context, which leads to two critical issues. First, the short reference window misses long-term dependencies and fine texture details. Second, propagating only feature-level information accumulates errors across frames, causing inaccurate predictions and loss of subtle textures. To address these issues, we propose the Long-term Spatio-Temporal Enhanced Context (L-STEC) method. We first extend the reference chain with an LSTM to capture long-term dependencies. We then incorporate warped spatial context from the pixel domain, fusing the spatio-temporal information through a multi-receptive-field network to better preserve reference details. Experimental results show that L-STEC significantly improves compression by enriching the contextual information, achieving 37.01% bitrate savings in PSNR and 31.65% in MS-SSIM over DCVC-TCM, outperforming both VTM-17.0 and DCVC-FM and establishing new state-of-the-art performance.
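To make the two mechanisms in the abstract concrete, here is a minimal PyTorch sketch: a ConvLSTM cell that carries state along the reference chain (the long-term temporal part), a backward-warp of the previous decoded frame (the pixel-domain spatial context), and a fusion block with parallel convolutions of different kernel sizes (one plausible reading of "multi-receptive-field network"). All module names, channel counts, and kernel sizes here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of L-STEC's context pipeline; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLSTMCell(nn.Module):
    """Carries long-term temporal state across frames in the reference chain."""

    def __init__(self, channels: int):
        super().__init__()
        # A single conv emits the input/forget/output gates and cell candidate.
        self.gates = nn.Conv2d(2 * channels, 4 * channels, 3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)


def warp(frame, flow):
    """Backward-warp a pixel-domain reference frame with a dense optical flow."""
    _, _, hgt, wid = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(hgt, dtype=frame.dtype, device=frame.device),
        torch.arange(wid, dtype=frame.dtype, device=frame.device),
        indexing="ij",
    )
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0) + flow.permute(0, 2, 3, 1)
    # grid_sample expects sampling coordinates normalized to [-1, 1].
    gx = 2.0 * grid[..., 0] / max(wid - 1, 1) - 1.0
    gy = 2.0 * grid[..., 1] / max(hgt - 1, 1) - 1.0
    return F.grid_sample(frame, torch.stack([gx, gy], dim=-1), align_corners=True)


class MultiReceptiveFieldFusion(nn.Module):
    """Fuses temporal features with warped spatial context at several scales."""

    def __init__(self, feat_ch: int, out_ch: int):
        super().__init__()
        in_ch = feat_ch + 3  # temporal features concatenated with warped RGB
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)
        )
        self.merge = nn.Conv2d(3 * out_ch, out_ch, 1)

    def forward(self, temporal_feat, warped_pixels):
        x = torch.cat([temporal_feat, warped_pixels], dim=1)
        return self.merge(torch.cat([b(x) for b in self.branches], dim=1))


# Toy roll-out: the LSTM state (h, c) propagates context beyond one frame,
# while the warped pixel-domain reference restores fine texture detail.
cell, fuse = ConvLSTMCell(32), MultiReceptiveFieldFusion(32, 32)
h = torch.zeros(1, 32, 64, 64)
c = torch.zeros(1, 32, 64, 64)
for _ in range(3):  # stand-in for a sequence of decoded frames
    feat = torch.rand(1, 32, 64, 64)  # previous frame's latent features
    ref = torch.rand(1, 3, 64, 64)    # previous decoded frame (pixels)
    flow = torch.zeros(1, 2, 64, 64)  # estimated motion (zero for the demo)
    temporal, (h, c) = cell(feat, (h, c))
    context = fuse(temporal, warp(ref, flow))
print(context.shape)  # torch.Size([1, 32, 64, 64])
```

The key design point the abstract argues for is visible in the loop: the recurrent state gives the predictor a reference window longer than one frame, and the warped pixels supply detail that feature-only propagation would smear away over time.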
Similar Papers
Accelerating Streaming Video Large Language Models via Hierarchical Token Compression
CV and Pattern Recognition
Makes videos play faster without losing quality.
Augmented Deep Contexts for Spatially Embedded Video Coding
Image and Video Processing
Makes videos look better with less data.
Single-step Diffusion-based Video Coding with Semantic-Temporal Guidance
CV and Pattern Recognition
Makes videos look good even with less data.