Generative AI for Video Translation: A Scalable Architecture for Multilingual Video Conferencing
By: Amirkia Rafiei Oskooei, Eren Caglar, Ibrahim Sahin, and more
Potential Business Impact:
Makes live-translated video calls smooth and scalable.
The real-time deployment of cascaded generative AI pipelines for applications like video translation is constrained by significant system-level challenges. These include the cumulative latency of sequential model inference and the quadratic ($\mathcal{O}(N^2)$) computational complexity that renders multi-user video conferencing applications unscalable. This paper proposes and evaluates a practical system-level framework designed to mitigate these critical bottlenecks. The proposed architecture incorporates a turn-taking mechanism that reduces computational complexity from quadratic to linear in multi-user scenarios, and a segmented processing protocol that manages inference latency for a perceptually real-time experience. We implement a proof-of-concept pipeline and conduct a rigorous performance analysis across a multi-tiered hardware setup, including commodity (NVIDIA RTX 4060), cloud (NVIDIA T4), and enterprise (NVIDIA A100) GPUs. Our objective evaluation demonstrates that the system achieves real-time throughput ($\tau < 1.0$) on modern hardware. A subjective user study further validates the approach, showing that a predictable initial processing delay is highly acceptable to users in exchange for smooth, uninterrupted playback. The work presents a validated, end-to-end system design that offers a practical roadmap for deploying scalable, real-time generative AI applications in multilingual communication platforms.
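The two ideas in the abstract can be sketched in a few lines. This is an illustrative model only, not the paper's implementation: the function names are hypothetical, it assumes turn-taking means only the active speaker's stream is translated per time slice, and it assumes the real-time factor $\tau$ is defined as processing time divided by segment duration.

```python
# Sketch: why turn-taking reduces translation workload from O(N^2) to O(N).
# Without turn-taking, every participant's stream must be translated for
# every other participant: N speakers x (N-1) listeners ~ O(N^2) runs.
# With turn-taking, only the active speaker is translated, once per
# listener: O(N) in the worst case.

def pipeline_runs(num_participants: int, turn_taking: bool) -> int:
    """Translation pipeline invocations per time slice (illustrative)."""
    n = num_participants
    if turn_taking:
        return n - 1          # one active speaker, translated for each listener
    return n * (n - 1)        # every speaker translated for every listener

def realtime_factor(processing_seconds: float, segment_seconds: float) -> float:
    """tau = processing time / segment duration; tau < 1.0 means the
    pipeline keeps up with playback (perceptually real-time)."""
    return processing_seconds / segment_seconds

# Example: an 8-person call with 5-second segments processed in 3.2 s.
assert pipeline_runs(8, turn_taking=False) == 56   # quadratic growth
assert pipeline_runs(8, turn_taking=True) == 7     # linear growth
assert realtime_factor(3.2, 5.0) < 1.0             # real-time throughput
```

Under this toy model, doubling the participant count doubles the turn-taking workload but quadruples the all-pairs workload, which is the scalability gap the paper's architecture targets.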
Similar Papers
Audio Driven Real-Time Facial Animation for Social Telepresence
Graphics
Makes virtual faces talk and move like real people.
LLIA -- Enabling Low-Latency Interactive Avatars: Real-Time Audio-Driven Portrait Video Generation with Diffusion Models
CV and Pattern Recognition
Makes talking avatars move realistically and fast.