TalkVid: A Large-Scale Diversified Dataset for Audio-Driven Talking Head Synthesis
By: Shunian Chen, Hejin Huang, Yexin Liu, and more
Potential Business Impact:
Makes talking-head videos look realistic for speakers of all ethnicities, languages, and ages.
Audio-driven talking head synthesis has achieved remarkable photorealism, yet state-of-the-art (SOTA) models exhibit a critical failure: they lack generalization to the full spectrum of human diversity in ethnicity, language, and age groups. We argue that this generalization gap is a direct symptom of limitations in existing training data, which lack the necessary scale, quality, and diversity. To address this challenge, we introduce TalkVid, a new large-scale, high-quality, and diverse dataset containing 1,244 hours of video from 7,729 unique speakers. TalkVid is curated through a principled, multi-stage automated pipeline that rigorously filters for motion stability, aesthetic quality, and facial detail, and is validated against human judgments to ensure its reliability. Furthermore, we construct and release TalkVid-Bench, a stratified evaluation set of 500 clips meticulously balanced across key demographic and linguistic axes. Our experiments demonstrate that a model trained on TalkVid outperforms counterparts trained on previous datasets, exhibiting superior cross-dataset generalization. Crucially, our analysis on TalkVid-Bench reveals performance disparities across subgroups that are obscured by traditional aggregate metrics, underscoring its necessity for future research. Code and data can be found at https://github.com/FreedomIntelligence/TalkVid
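To make the abstract's "multi-stage automated pipeline" concrete, here is a minimal Python sketch of a cascade-style filter over the three quality axes the paper names (motion stability, aesthetic quality, facial detail). The `Clip` fields, the threshold values, and the stage ordering are hypothetical stand-ins for illustration, not TalkVid's actual implementation; in practice each score would come from a dedicated model rather than being stored on the clip.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Clip:
    path: str
    # Per-clip scores in [0, 1]. In a real pipeline these would be
    # produced by dedicated predictors (e.g., an optical-flow-based
    # stability score, an aesthetic model, a face-quality assessor).
    motion_stability: float
    aesthetic_quality: float
    facial_detail: float

# Each stage is a (name, predicate) pair; a clip must pass every
# stage to survive. These thresholds are illustrative only.
STAGES: List[Tuple[str, Callable[[Clip], bool]]] = [
    ("motion stability",  lambda c: c.motion_stability >= 0.7),
    ("aesthetic quality", lambda c: c.aesthetic_quality >= 0.5),
    ("facial detail",     lambda c: c.facial_detail >= 0.6),
]

def filter_clips(clips: List[Clip]) -> List[Clip]:
    """Run clips through the cascade, dropping failures stage by stage."""
    survivors = clips
    for name, passes in STAGES:
        before = len(survivors)
        survivors = [c for c in survivors if passes(c)]
        print(f"{name}: {before} -> {len(survivors)} clips")
    return survivors

if __name__ == "__main__":
    demo = [
        Clip("a.mp4", 0.9, 0.8, 0.7),  # passes all stages
        Clip("b.mp4", 0.4, 0.9, 0.9),  # fails motion stability
        Clip("c.mp4", 0.8, 0.6, 0.3),  # fails facial detail
    ]
    kept = filter_clips(demo)
    print("kept:", [c.path for c in kept])
```

A cascade like this is cheap to audit: logging the survivor count per stage shows which filter removes the most data, which is also the kind of information one would compare against human judgments when validating the pipeline's reliability.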
Similar Papers
TalkCuts: A Large-Scale Dataset for Multi-Shot Human Speech Video Generation
CV and Pattern Recognition
Makes videos of people talking with different camera angles.
TalkVerse: Democratizing Minute-Long Audio-Driven Video Generation
CV and Pattern Recognition
Makes videos of people talking from sound.
Multi-human Interactive Talking Dataset
CV and Pattern Recognition
Makes videos of many people talking together.