Interaction Analysis by Humans and AI: A Comparative Perspective
By: Maryam Teimouri, Filip Ginter, Tomi "bgt" Suovuo
Potential Business Impact:
Makes learning games more fun for kids.
This paper explores how Mixed Reality (MR) and 2D video conferencing influence children's communication during a gesture-based guessing game. Finnish-speaking participants completed a short collaborative task in two setups: Microsoft HoloLens MR and Zoom. Audio-video recordings were transcribed and analyzed using Large Language Models (LLMs), enabling iterative correction, translation, and annotation. Despite limitations in annotation accuracy and inter-annotator agreement, the automated approach significantly reduced processing time and allowed non-Finnish-speaking researchers to take part in the data analysis. Evaluations highlight both the efficiency and the constraints of LLM-based analysis for capturing children's interactions across these platforms. Initial findings indicate that MR fosters richer interaction, evidenced by more emotional expression captured in the annotations and by heightened engagement, while Zoom offers simplicity and accessibility. The study underscores the potential of MR to enhance collaborative learning experiences for children in distributed settings.
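The correction–translation–annotation workflow described above can be sketched as a simple per-utterance pipeline. This is a hypothetical illustration, not the authors' implementation: `call_llm`, `process_utterance`, and the prompt strings are all assumptions, with the LLM call stubbed out so the chaining of the three steps is visible.

```python
# Hypothetical sketch of the LLM transcript pipeline: each utterance is
# corrected, then translated, then annotated. call_llm is a placeholder
# for any real LLM API call (the names and prompts are assumptions).

def call_llm(prompt: str, text: str) -> str:
    """Stub standing in for a real LLM request; it merely tags the text
    with the prompt so each pipeline stage's effect is traceable."""
    return f"[{prompt}] {text}"

def process_utterance(utterance: str) -> dict:
    # Step 1: fix transcription errors in the Finnish transcript.
    corrected = call_llm("correct Finnish transcript", utterance)
    # Step 2: translate so non-Finnish-speaking researchers can analyze it.
    translated = call_llm("translate to English", corrected)
    # Step 3: annotate the translated utterance (e.g. emotion, engagement).
    annotation = call_llm("annotate emotion and engagement", translated)
    return {"corrected": corrected,
            "translated": translated,
            "annotation": annotation}

result = process_utterance("arvaa mitä minä esitän")
```

In a real pipeline each stage would be a separate prompt to the model, with human spot-checks between iterations to catch the accuracy and agreement issues the paper reports.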
Similar Papers
Multimodal "Puppeteer": An Exploration of Robot Teleoperation Via Virtual Counterpart with LLM-Driven Voice and Gesture Interaction in Augmented Reality
Human-Computer Interaction
Control robots with your voice and hands.
Applying LLM-Powered Virtual Humans to Child Interviews in Child-Centered Design
Human-Computer Interaction
Lets computers talk to kids to learn their ideas.
Teaching LLMs to See and Guide: Context-Aware Real-Time Assistance in Augmented Reality
Human-Computer Interaction
Helps AR/VR assistants understand what you're doing.