AV-Dialog: Spoken Dialogue Models with Audio-Visual Input
By: Tuochao Chen, Bandhav Veluri, Hongyu Gong, and more
Potential Business Impact:
Lets computers understand conversations with many people.
Dialogue models falter in noisy, multi-speaker environments, often producing irrelevant responses and awkward turn-taking. We present AV-Dialog, the first multimodal dialogue framework that uses both audio and visual cues to track the target speaker, predict turn-taking, and generate coherent responses. By combining acoustic tokenization with multi-task, multi-stage training on monadic, synthetic, and real audio-visual dialogue datasets, AV-Dialog achieves robust streaming transcription, semantically grounded turn-boundary detection, and accurate responses, resulting in a natural conversational flow. Experiments show that AV-Dialog outperforms audio-only models under interference, reducing transcription errors, improving turn-taking prediction, and enhancing human-rated dialogue quality. These results highlight the power of seeing as well as hearing for speaker-aware interaction, paving the way for spoken dialogue agents that perform robustly in real-world, noisy environments.
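To make the multi-task setup concrete, here is a minimal sketch, not the authors' implementation, of the kind of architecture the abstract describes: acoustic tokens and visual frame features are fused by a shared backbone, which then feeds three heads for streaming transcription, turn-boundary prediction, and response generation, trained with a weighted multi-task loss. All module names, dimensions, and loss weights below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AVDialogSketch(nn.Module):
    """Hypothetical audio-visual dialogue model with three task heads."""

    def __init__(self, audio_vocab=1024, text_vocab=32000, d_model=512):
        super().__init__()
        # Acoustic tokens (e.g. from a speech tokenizer) and visual frame features
        self.audio_embed = nn.Embedding(audio_vocab, d_model)
        self.visual_proj = nn.Linear(768, d_model)   # assumes 768-dim visual features
        # Shared fusion backbone over the concatenated audio-visual sequence
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        # Task heads: target-speaker transcription, turn boundary, response tokens
        self.transcribe_head = nn.Linear(d_model, text_vocab)
        self.turn_head = nn.Linear(d_model, 2)        # continue speaking vs. yield turn
        self.response_head = nn.Linear(d_model, text_vocab)

    def forward(self, audio_tokens, visual_feats):
        a = self.audio_embed(audio_tokens)            # (B, Ta, d)
        v = self.visual_proj(visual_feats)            # (B, Tv, d)
        h = self.backbone(torch.cat([a, v], dim=1))   # joint audio-visual context
        return {
            "transcript_logits": self.transcribe_head(h),
            "turn_logits": self.turn_head(h[:, -1]),  # decision at the current frame
            "response_logits": self.response_head(h),
        }

def multitask_loss(outputs, targets, weights=(1.0, 0.5, 1.0)):
    """Weighted sum of the three task losses; the weights are illustrative."""
    ce = nn.functional.cross_entropy
    l_asr = ce(outputs["transcript_logits"].flatten(0, 1), targets["transcript"].flatten())
    l_turn = ce(outputs["turn_logits"], targets["turn"])
    l_resp = ce(outputs["response_logits"].flatten(0, 1), targets["response"].flatten())
    return weights[0] * l_asr + weights[1] * l_turn + weights[2] * l_resp
```

The multi-stage training the abstract mentions would, under this sketch, amount to running such a loss first on monadic and synthetic data and then on real audio-visual dialogues, possibly re-weighting the heads per stage; those scheduling details are not specified here and are assumptions.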
Similar Papers
MAViD: A Multimodal Framework for Audio-Visual Dialogue Understanding and Generation
CV and Pattern Recognition
Creates talking, moving characters from text.
Seeing is Believing: Emotion-Aware Audio-Visual Language Modeling for Expressive Speech Generation
Computation and Language
Makes computer voices sound more real.