End-to-End Learning-based Video Streaming Enhancement Pipeline: A Generative AI Approach
By: Emanuele Artioli, Farzad Tashtarian, Christian Timmerer
The primary challenge of video streaming is to balance high video quality with smooth playback. Traditional codecs are well tuned for this trade-off, yet because they cannot exploit semantic context, they must encode and transmit all of the video data to the client. This paper introduces ELVIS (End-to-end Learning-based VIdeo Streaming Enhancement Pipeline), an end-to-end architecture that combines server-side encoding optimizations with client-side generative inpainting to remove and reconstruct redundant video data. Its modular design allows ELVIS to integrate different codecs, inpainting models, and quality metrics, making it adaptable to future innovations. Our results show that current technologies achieve improvements of up to 11 VMAF points over baseline benchmarks, though challenges remain for real-time applications due to computational demands. ELVIS represents a foundational step toward incorporating generative AI into video streaming pipelines, enabling higher quality experiences without increased bandwidth requirements.
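To make the described architecture concrete, below is a minimal sketch of a server-drop / client-reconstruct pipeline. Everything here is an assumption for illustration: the function names (server_drop_redundant, client_inpaint), the variance-based redundancy heuristic, and the mean-fill placeholder that stands in for a generative inpainting model are all hypothetical, not the paper's actual codec integration or models.

```python
# Illustrative sketch of an ELVIS-style pipeline: the server marks redundant
# blocks and omits them; the client reconstructs them after decoding.
# All names and heuristics here are hypothetical stand-ins.
import numpy as np

def server_drop_redundant(frame: np.ndarray, block: int = 16, thresh: float = 4.0):
    """Server side: mark low-variance (redundant) blocks for removal.

    Returns the frame with masked blocks zeroed out plus the binary mask,
    standing in for the paper's server-side encoding optimizations.
    """
    h, w = frame.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = frame[y:y + block, x:x + block]
            if patch.std() < thresh:  # nearly uniform block: cheap to reconstruct
                mask[y:y + block, x:x + block] = True
    sparse = frame.copy()
    sparse[mask] = 0  # dropped data would simply not be transmitted
    return sparse, mask

def client_inpaint(sparse: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Client side: reconstruct the masked pixels.

    A generative inpainting model would run here; as a placeholder we fill
    each masked pixel with the mean of the surviving (unmasked) pixels.
    """
    out = sparse.astype(float).copy()
    out[mask] = sparse[~mask].mean() if (~mask).any() else 0.0
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(64, 64)).astype(float)
    frame[16:48, 16:48] = 128.0  # a flat, redundant region
    sparse, mask = server_drop_redundant(frame)
    recon = client_inpaint(sparse, mask)
    print(f"dropped {mask.mean() * 100:.1f}% of pixels; "
          f"MAE on dropped pixels: {np.abs(recon[mask] - frame[mask]).mean():.2f}")
```

In a real deployment the mask (or a compact description of it) would travel alongside the encoded bitstream, and reconstruction quality would be scored with a perceptual metric such as VMAF, as in the paper's evaluation.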