Conveying Meaning through Gestures: An Investigation into Semantic Co-Speech Gesture Generation
By: Hendric Voss, Lisa Michelle Bohnenkamp, Stefan Kopp
Potential Business Impact:
Makes computer-generated gestures convey meaning more clearly.
This study explores two frameworks for co-speech gesture generation, AQ-GT and its semantically-augmented variant AQ-GT-a, to evaluate their ability to convey meaning through gestures and how humans perceive the resulting movements. Using sentences from the SAGA spatial communication corpus, contextually similar sentences, and novel movement-focused sentences, we conducted a user-centered evaluation of concept recognition and human-likeness. Results revealed a nuanced relationship between semantic annotations and performance. The original AQ-GT framework, lacking explicit semantic input, was surprisingly more effective at conveying concepts within its training domain. Conversely, the AQ-GT-a framework demonstrated better generalization, particularly for representing shape and size in novel contexts. While participants rated gestures from AQ-GT-a as more expressive and helpful, they did not perceive them as more human-like. These findings suggest that explicit semantic enrichment does not guarantee improved gesture generation and that its effectiveness is highly dependent on the context, indicating a potential trade-off between specialization and generalization.
Similar Papers
ImaGGen: Zero-Shot Generation of Co-Speech Semantic Gestures Grounded in Language and Image Input
Human-Computer Interaction
Makes computer avatars convey meaning through gestures while they speak.
SIG-Chat: Spatial Intent-Guided Conversational Gesture Generation Involving How, When and Where
Graphics
Robots can now point and talk like people.