Conveying Meaning through Gestures: An Investigation into Semantic Co-Speech Gesture Generation

Published: October 20, 2025 | arXiv ID: 2510.17599v1

By: Hendric Voss, Lisa Michelle Bohnenkamp, Stefan Kopp

Potential Business Impact:

Improves how well computer-generated gestures convey meaning alongside speech.

Business Areas:
Semantic Search, Internet Services

This study compares two frameworks for co-speech gesture generation, AQ-GT and its semantically augmented variant AQ-GT-a, to evaluate their ability to convey meaning through gestures and how humans perceive the resulting movements. Using sentences from the SAGA spatial communication corpus, contextually similar sentences, and novel movement-focused sentences, the authors conducted a user-centered evaluation of concept recognition and human-likeness. Results revealed a nuanced relationship between semantic annotations and performance. The original AQ-GT framework, which lacks explicit semantic input, was surprisingly more effective at conveying concepts within its training domain. Conversely, the AQ-GT-a framework generalized better, particularly when representing shape and size in novel contexts. While participants rated gestures from AQ-GT-a as more expressive and helpful, they did not perceive them as more human-like. These findings suggest that explicit semantic enrichment does not guarantee improved gesture generation and that its effectiveness depends heavily on context, indicating a potential trade-off between specialization and generalization.

Country of Origin
🇩🇪 Germany

Page Count
9 pages

Category
Computer Science:
Human-Computer Interaction