Neural Compression of 360-Degree Equirectangular Videos using Quality Parameter Adaptation
By: Daichi Arai, Yuichi Kondo, Kyohei Unno, and more
This study proposes a practical approach for compressing 360-degree equirectangular videos with pretrained neural video compression (NVC) models. Without requiring additional training or changes to the model architecture, the proposed method extends the quantization-parameter adaptation techniques of traditional video codecs to NVC, exploiting the spatially varying sampling density of the equirectangular projection. We introduce latitude-based adaptive quality parameters selected through rate-distortion optimization for NVC. The method uses vector bank interpolation for latent modulation, enabling flexible adaptation to arbitrary quality parameters and mitigating the rounding errors that arise with adaptive quantization parameters. Experimental results demonstrate that applying this method to the DCVC-RT framework yields BD-rate savings of 5.2% in terms of weighted spherical peak signal-to-noise ratio on the JVET class S1 test sequences, with only a 0.3% increase in processing time.
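The intuition behind latitude-based adaptation can be sketched as follows: in an equirectangular projection, the horizontal sampling density at latitude φ is proportional to cos φ, so rows near the poles carry oversampled content and can tolerate coarser quantization than rows near the equator. The Python sketch below is an illustrative approximation only, not the authors' implementation: the quality scale, the `q_base`/`q_min` values, and the linear blend are placeholder assumptions, whereas the paper derives its mapping by rate-distortion optimization and feeds the resulting (possibly fractional) quality values to the NVC model via vector bank interpolation of latent-modulation vectors instead of rounding them.

```python
import numpy as np


def erp_latitude_weights(height: int) -> np.ndarray:
    """Per-row spherical weights for an equirectangular (ERP) frame.

    Row i of an HxW ERP frame corresponds to latitude
    phi = (0.5 - (i + 0.5) / H) * pi; its horizontal sampling
    density (and its WS-PSNR row weight) is cos(phi).
    """
    rows = np.arange(height) + 0.5
    phi = (0.5 - rows / height) * np.pi
    return np.cos(phi)


def latitude_quality_map(height: int, q_base: float, q_min: float) -> np.ndarray:
    """Illustrative latitude-adaptive quality parameters.

    Rows near the equator (weight ~ 1) keep the base quality,
    while rows near the poles (weight ~ 0) drop toward q_min,
    reflecting their lower spherical importance. A linear blend
    is used here purely as a stand-in for the paper's
    rate-distortion-optimized mapping.
    """
    w = erp_latitude_weights(height)
    return q_min + (q_base - q_min) * w


if __name__ == "__main__":
    # Hypothetical quality scale where larger values mean higher fidelity.
    q = latitude_quality_map(height=8, q_base=63.0, q_min=40.0)
    for i, qi in enumerate(q):
        print(f"row {i}: quality {qi:.1f}")
```

Because such a mapping generally produces non-integer quality values per latitude band, interpolating between the quality-conditioning vectors (as the abstract's vector bank interpolation describes) avoids the rounding errors that a discrete quantization-parameter grid would introduce.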
Similar Papers
Boosting Neural Video Representation via Online Structural Reparameterization
Image and Video Processing
Compresses neural video representations more efficiently for faster transmission.
An Efficient Adaptive Compression Method for Human Perception and Machine Vision Tasks
Computer Vision and Pattern Recognition
Adapts compression to serve both human viewing and machine-vision tasks.
Enhancing Quality for VVC Compressed Videos with Omniscient Quality Enhancement Model
Computer Vision and Pattern Recognition
Enhances the visual quality of VVC-compressed videos at low bitrates.