Audio-Visual Driven Compression for Low-Bitrate Talking Head Videos
By: Riku Takahashi, Ryugo Morita, Jinjia Zhou
Potential Business Impact:
Makes talking-head videos smaller, clearer, and in sync with the speech.
Talking head video compression has advanced with neural rendering and keypoint-based methods, but challenges remain at low bitrates, including large head movements, poor lip synchronization, and distorted facial reconstructions. To address these problems, we propose a novel audio-visual driven video codec that integrates compact 3D motion features and audio signals. This approach robustly models significant head rotations and aligns lip movements with speech, improving both compression efficiency and reconstruction quality. Experiments on the CelebV-HQ dataset show that our method reduces bitrate by 22% compared to VVC and by 8.5% over a state-of-the-art learning-based codec. Furthermore, it provides superior lip-sync accuracy and visual fidelity at comparable bitrates, highlighting its effectiveness in bandwidth-constrained scenarios.
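The abstract does not give implementation details, but the general idea of driving frame reconstruction from a reference frame plus compact motion and audio side information can be illustrated with a short sketch. The PyTorch module below is a minimal, hypothetical illustration only: the module names (`AudioVisualDecoder`), feature dimensions, and the simple additive fusion are assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' code) of a decoder that animates a
# transmitted reference frame using compact 3D motion parameters and an
# audio feature vector. All sizes and the fusion scheme are assumptions.
import torch
import torch.nn as nn


class AudioVisualDecoder(nn.Module):
    def __init__(self, motion_dim=15, audio_dim=128, hidden=256):
        super().__init__()
        # Encode the reference frame once per sequence (sent as an intra frame).
        self.ref_encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, hidden, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Per-frame side information: a handful of 3D motion parameters
        # (e.g. head rotation/translation plus sparse keypoints) and an
        # audio embedding for the corresponding speech segment.
        self.motion_mlp = nn.Sequential(nn.Linear(motion_dim, hidden), nn.ReLU())
        self.audio_mlp = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        # Decode the modulated reference features back to an image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, ref_frame, motion_params, audio_feats):
        feats = self.ref_encoder(ref_frame)                   # (B, hidden, h, w)
        drive = self.motion_mlp(motion_params) + self.audio_mlp(audio_feats)
        # Broadcast the driving vector over spatial positions (naive fusion).
        feats = feats * drive.unsqueeze(-1).unsqueeze(-1)
        return self.decoder(feats)


# Only the reference frame plus a few floats per frame need to be transmitted,
# which is where this family of codecs gets its low-bitrate savings.
decoder = AudioVisualDecoder()
ref = torch.rand(1, 3, 256, 256)     # decoded reference (intra) frame
motion = torch.rand(1, 15)           # compact 3D motion features (assumed size)
audio = torch.rand(1, 128)           # audio embedding for this frame (assumed size)
frame = decoder(ref, motion, audio)  # -> (1, 3, 256, 256) reconstructed frame
```

In this kind of design, the audio embedding supplies the lip-motion cue and the 3D motion parameters supply head pose, so both can be coded at a tiny fraction of the bits a pixel-domain residual would need.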
Similar Papers
Bidirectional Learned Facial Animation Codec for Low Bitrate Talking Head Videos
Image and Video Processing
Makes talking-head videos smaller without losing quality.
Audio-Visual Cross-Modal Compression for Generative Face Video Coding
Image and Video Processing
Makes video calls clearer by using the audio to help reconstruct the video.
Exploiting Temporal Audio-Visual Correlation Embedding for Audio-Driven One-Shot Talking Head Animation
CV and Pattern Recognition
Makes talking-head videos closely match the speech.