TalkCuts: A Large-Scale Dataset for Multi-Shot Human Speech Video Generation
By: Jiaben Chen, Zixin Wang, Ailing Zeng, and more
Potential Business Impact:
Makes videos of people talking with different camera angles.
In this work, we present TalkCuts, a large-scale dataset designed to facilitate the study of multi-shot human speech video generation. Unlike existing datasets that focus on single-shot, static viewpoints, TalkCuts offers 164k clips totaling over 500 hours of high-quality human speech video with diverse camera shots, including close-up, half-body, and full-body views. The dataset includes detailed textual descriptions, 2D keypoints, and 3D SMPL-X motion annotations covering over 10k identities, enabling multimodal learning and evaluation. As a first demonstration of the dataset's value, we present Orator, an LLM-guided multimodal generation framework that serves as a simple baseline: the language model functions as a multi-faceted director, orchestrating detailed specifications for camera transitions, speaker gesticulation, and vocal modulation. This architecture enables the synthesis of coherent long-form videos through our integrated multimodal video generation module. Extensive experiments in both pose-guided and audio-driven settings show that training on TalkCuts significantly enhances the cinematographic coherence and visual appeal of generated multi-shot speech videos. We believe TalkCuts provides a strong foundation for future work in controllable, multi-shot speech video generation and broader multimodal learning.
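To make the "LLM as director" architecture concrete, here is a minimal Python sketch of that orchestration pattern. The abstract does not publish Orator's interfaces, so every name below (ShotSpec, llm_director, render_clip, and the fixed shot plan) is a hypothetical illustration, with the actual language model and video generator replaced by offline stubs.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical per-shot specification. Orator's director is described as
# planning camera transitions, speaker gestures, and vocal modulation;
# the exact fields here are assumptions, not the paper's schema.
@dataclass
class ShotSpec:
    shot_type: str     # e.g. "close-up", "half-body", "full-body"
    start_sec: float   # shot start time within the speech
    end_sec: float     # shot end time
    gesture_hint: str  # coarse gesticulation cue for the motion module
    vocal_hint: str    # prosody/emphasis cue for the audio track

def llm_director(transcript: str) -> List[ShotSpec]:
    """Stand-in for the LLM 'director'. In the real framework a language
    model would read the transcript and emit a shot plan; here we return
    a fixed plan so the sketch runs offline."""
    return [
        ShotSpec("close-up", 0.0, 4.0, "subtle nod", "neutral"),
        ShotSpec("half-body", 4.0, 9.0, "open-palm emphasis", "emphatic"),
        ShotSpec("full-body", 9.0, 14.0, "step toward camera", "neutral"),
    ]

def render_clip(spec: ShotSpec, audio_path: str) -> str:
    """Placeholder for the multimodal video generation module. A real
    system would condition a video model on the pose/audio signals and
    return a rendered clip; here we just return a descriptive filename."""
    return f"clip_{spec.shot_type}_{spec.start_sec:.0f}-{spec.end_sec:.0f}.mp4"

def generate_multishot_video(transcript: str, audio_path: str) -> List[str]:
    """Plan shots with the director, then render each shot in order;
    concatenating the per-shot clips yields the long-form video."""
    plan = llm_director(transcript)
    return [render_clip(spec, audio_path) for spec in plan]

if __name__ == "__main__":
    clips = generate_multishot_video("Welcome to the keynote...", "speech.wav")
    print(clips)
```

The key design point this sketch captures is the separation of concerns: the language model produces a structured, editable shot plan, and a downstream generation module consumes that plan, which is what allows coherent camera cuts across a long video rather than a single static viewpoint.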
Similar Papers
TalkVid: A Large-Scale Diversified Dataset for Audio-Driven Talking Head Synthesis
CV and Pattern Recognition
Makes talking videos look real for everyone.
Multi-human Interactive Talking Dataset
CV and Pattern Recognition
Makes videos of many people talking together.
TalkVerse: Democratizing Minute-Long Audio-Driven Video Generation
CV and Pattern Recognition
Makes videos of people talking from sound.