Modeling Turn-Taking with Semantically Informed Gestures

Published: October 22, 2025 | arXiv ID: 2510.19350v1

By: Varsha Suresh, M. Hamza Mughal, Christian Theobalt, and more

Potential Business Impact:

Helps conversational AI systems predict when to take or yield a speaking turn in multi-party conversations.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In conversation, humans use multimodal cues, such as speech, gestures, and gaze, to manage turn-taking. While linguistic and acoustic features are informative, gestures provide complementary cues for modeling these transitions. To study this, we introduce DnD Gesture++, an extension of the multi-party DnD Gesture corpus enriched with 2,663 semantic gesture annotations spanning iconic, metaphoric, deictic, and discourse types. Using this dataset, we model turn-taking prediction through a Mixture-of-Experts framework integrating text, audio, and gestures. Experiments show that incorporating semantically guided gestures yields consistent performance gains over baselines, demonstrating their complementary role in multimodal turn-taking.
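The paper describes a Mixture-of-Experts framework fusing text, audio, and gesture features, but no implementation details appear here. Below is a minimal PyTorch sketch of one plausible reading: a softmax-gated mixture over per-modality experts feeding a turn-transition classifier. The feature dimensions, the MLP expert design, the gating mechanism, and the hold/yield label set are all illustrative assumptions, not the authors' architecture.

```python
# A minimal sketch (not the authors' implementation) of mixture-of-experts
# fusion over text, audio, and gesture features for turn-taking prediction.
# All dimensions and the gating design are assumptions for illustration.
import torch
import torch.nn as nn


class ModalityExpert(nn.Module):
    """One expert per modality (text, audio, gesture); a small MLP here."""

    def __init__(self, in_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class MoETurnTaking(nn.Module):
    """Gate over per-modality experts, then classify the turn transition
    (assumed binary here: hold vs. yield) from the gated mixture."""

    def __init__(self, text_dim=768, audio_dim=128, gesture_dim=64,
                 hidden_dim=128, num_classes=2):
        super().__init__()
        self.experts = nn.ModuleList([
            ModalityExpert(text_dim, hidden_dim),
            ModalityExpert(audio_dim, hidden_dim),
            ModalityExpert(gesture_dim, hidden_dim),
        ])
        # Gating network weights the three experts from the concatenated input.
        self.gate = nn.Linear(text_dim + audio_dim + gesture_dim, 3)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, text, audio, gesture):
        # Per-example soft weights over the three modality experts.
        weights = torch.softmax(
            self.gate(torch.cat([text, audio, gesture], dim=-1)), dim=-1
        )
        expert_outs = torch.stack(
            [e(x) for e, x in zip(self.experts, (text, audio, gesture))], dim=1
        )  # (batch, 3, hidden_dim)
        mixed = (weights.unsqueeze(-1) * expert_outs).sum(dim=1)
        return self.classifier(mixed)


# Usage with random features standing in for real encoder outputs.
model = MoETurnTaking()
logits = model(torch.randn(4, 768), torch.randn(4, 128), torch.randn(4, 64))
print(logits.shape)  # torch.Size([4, 2])
```

Soft gating lets the model learn, per utterance, how much to weight the gesture channel relative to text and audio, which mirrors the complementary role of gestures argued in the abstract.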

Country of Origin
🇩🇪 Germany

Page Count
7 pages

Category
Computer Science: Computation and Language