A Tri-Dynamic Preprocessing Framework for UGC Video Compression
By: Fei Zhao, Mengxi Guo, Shijie Zhao, et al.
In recent years, user-generated content (UGC) has become the dominant force in internet traffic. However, UGC videos exhibit greater variability and more diverse characteristics than traditional encoding test sequences. This diversity challenges the effectiveness of data-driven machine-learning algorithms for encoding optimization in broader UGC scenarios. To address this issue, we propose a Tri-Dynamic Preprocessing framework for UGC. Firstly, we employ an adaptive factor to regulate preprocessing intensity. Secondly, an adaptive quantization level is used to fine-tune the codec simulator. Thirdly, we utilize an adaptive lambda tradeoff to adjust the rate-distortion loss function. Experimental results on large-scale test sets demonstrate that our method achieves excellent performance.
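The abstract does not reproduce the paper's formulation, but the objective it describes typically has the rate-distortion form L = D + λ·R, with three per-sample controls: a preprocessing intensity, a quantization level for a differentiable codec simulator, and the tradeoff λ itself. The Python sketch below only illustrates how three such adaptive quantities could enter the loss; the names (preprocess, codec_sim, strength, qstep, lam), the blur-based preprocessing stage, the straight-through quantization proxy, and the rate proxy are all hypothetical stand-ins, not the authors' implementation.

import torch
import torch.nn.functional as F

def preprocess(x, strength):
    # Hypothetical preprocessing stage: blend each frame with a smoothed
    # version, with `strength` in [0, 1] controlling the intensity.
    smoothed = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
    return (1 - strength) * x + strength * smoothed

def codec_sim(x, qstep):
    # Toy differentiable codec simulator: uniform quantization with a
    # straight-through gradient; `qstep` stands in for the adaptive
    # quantization level.
    q = torch.round(x / qstep) * qstep
    return x + (q - x).detach()

# Per-sample adaptive controls for a batch of two frames.
x = torch.rand(2, 3, 64, 64, requires_grad=True)
strength = torch.tensor([0.2, 0.8]).view(-1, 1, 1, 1)  # preprocessing intensity
qstep = torch.tensor([0.05, 0.10]).view(-1, 1, 1, 1)   # quantization level
lam = torch.tensor([0.01, 0.05])                       # rate-distortion tradeoff

y = preprocess(x, strength)
x_hat = codec_sim(y, qstep)
distortion = ((x_hat - x) ** 2).flatten(1).mean(dim=1)  # per-sample MSE
rate = x_hat.abs().flatten(1).mean(dim=1)               # crude rate proxy
loss = (distortion + lam * rate).mean()                 # L = D + lambda * R
loss.backward()

In a real pipeline the three controls would presumably be predicted per clip from content features rather than supplied by hand; the sketch only shows the general shape of a tri-dynamic objective in which all three quantities vary per sample instead of being fixed hyperparameters.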