CLIO: A Tour Guide Robot with Co-speech Actions for Visual Attention Guidance and Enhanced User Engagement
By: Yuxuan Chen, Ian Leong Ting Lo, Bao Guo and more
While audio guides can offer rich information about an exhibit, it is difficult for visitors to locate specific exhibit details from a verbal description alone. We present \textit{CLIO}, a tour guide robot that uses co-speech actions to direct visitors' visual attention and thereby enhance overall user engagement during a guided tour. \textit{CLIO} is equipped with designed actions for engaging visitors: it makes eye contact by tracking a visitor's face and blinking its eyes, and orients their attention with head movements and a laser pointer. We further use a Large Language Model (LLM) to coordinate these designed actions with a given narrative script for the exhibition. We conducted a user study evaluating the \textit{CLIO} system in a mock-up exhibition of historical photographs, collecting feedback from questionnaires and quantitative data from a mobile eye tracker. Experimental results validated the design of the engaging actions and demonstrated their efficacy in guiding visitors' visual attention. The results also showed that \textit{CLIO} achieved higher engagement than a baseline system offering audio guidance only.
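The abstract does not specify how the LLM aligns actions with the narration. As a rough illustration only, one plausible approach is to prompt the model to annotate the script with inline action tags drawn from a fixed vocabulary, then parse the tags into a timed action sequence. The sketch below is a minimal, assumed design, not the authors' implementation: `call_llm` is a hypothetical stand-in for any chat-completion API, and the tag names merely paraphrase the actions mentioned in the abstract (eye contact, blinking, head movement, laser pointing).

```python
import re

# Assumed action vocabulary, paraphrasing the behaviors named in the abstract.
ACTIONS = {"EYE_CONTACT", "BLINK", "HEAD_TURN", "LASER_POINT"}

PROMPT = (
    "You are controlling a tour guide robot. Insert action tags such as "
    "[LASER_POINT target=...] into the narration wherever the robot should "
    "direct the visitor's attention. Allowed tags: "
    + ", ".join(sorted(ACTIONS))
    + ".\nNarration:\n{script}"
)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError

def annotate_script(script: str):
    """Return a list of (speech, action) segments parsed from the tagged text."""
    tagged = call_llm(PROMPT.format(script=script))
    # re.split with one capture group alternates narration and tag names:
    # [narration, tag, narration, tag, ..., narration]
    parts = re.split(r"\[([A-Z_]+)[^\]]*\]", tagged)
    segments = []
    for i in range(0, len(parts), 2):
        speech = parts[i].strip()
        action = parts[i + 1] if i + 1 < len(parts) else None
        if action is not None and action not in ACTIONS:
            action = None  # drop tags outside the allowed vocabulary
        segments.append((speech, action))
    return segments
```

Each resulting (speech, action) pair could then be dispatched in order to the text-to-speech and motion controllers, so that the robot's pointing and gaze cues land on the exhibit details being described.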