ChatAR: Conversation Support using Large Language Model and Augmented Reality
By: Yuichiro Fujimoto
Potential Business Impact:
Helps you talk better by showing info you might not know.
Engaging in smooth conversations with others is a crucial social skill. However, differences in knowledge between conversation participants can hinder effective communication. To address this issue, this study proposes a real-time support system that integrates head-mounted display (HMD)-based augmented reality (AR) technology with large language models (LLMs). The system facilitates conversation by recognizing keywords during dialogue, generating relevant information with the LLM, reformatting it, and presenting it to the user via the HMD. A significant challenge for such a system is that the user's eye movements may reveal to the conversation partner that they are reading the displayed text. This study therefore also proposes an information presentation method that accounts for natural eye movements during conversation. Two experiments were conducted to evaluate the effectiveness of the proposed system. The first revealed that the proposed presentation method reduces the likelihood of the conversation partner noticing that the user is reading the displayed text. The second demonstrated that the proposed method led to a more balanced speech ratio between the user and the conversation partner, as well as an increase in the perceived excitement of the conversation.
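The abstract outlines a four-stage pipeline: keyword recognition during dialogue, LLM-based information generation, reformatting, and HMD presentation. Below is a minimal Python sketch of that flow under stated assumptions; every name in it (extract_keywords, reformat_for_hmd, HMDCard, the llm_generate callable) is a hypothetical placeholder rather than the authors' implementation, and the speech-recognition and AR-rendering layers are omitted.

import re
from dataclasses import dataclass

@dataclass
class HMDCard:
    """One glanceable card shown on the HMD (hypothetical structure)."""
    keyword: str
    lines: list[str]  # short lines the user can read at a glance

def extract_keywords(utterance: str, known_terms: set[str]) -> list[str]:
    # Naive keyword spotting: match transcribed tokens against a term list.
    # The actual system's recognizer is not specified in the abstract.
    tokens = re.findall(r"[A-Za-z][A-Za-z-]+", utterance)
    return [t for t in tokens if t.lower() in known_terms]

def reformat_for_hmd(raw_text: str, max_lines: int = 3, max_chars: int = 40) -> list[str]:
    # Compress LLM output into a few short lines so the user needs only
    # brief glances rather than sustained, noticeable reading.
    sentences = re.split(r"(?<=[.!?])\s+", raw_text.strip())
    return [s[:max_chars] for s in sentences[:max_lines]]

def support_step(utterance: str, known_terms: set[str], llm_generate) -> list[HMDCard]:
    # One pass of the loop: keywords -> LLM generation -> reformat -> cards.
    # llm_generate is any callable taking a prompt and returning text.
    cards = []
    for kw in extract_keywords(utterance, known_terms):
        raw = llm_generate(
            f"In two short sentences, explain '{kw}' for someone "
            f"hearing it in conversation."
        )
        cards.append(HMDCard(keyword=kw, lines=reformat_for_hmd(raw)))
    return cards

The line and character limits in reformat_for_hmd reflect the abstract's motivation: shorter, glanceable output reduces sustained reading, which is the same eye-movement concern the paper's presentation method addresses.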
Similar Papers
ARbiter: Generating Dialogue Options and Communication Support in Augmented Reality
Human-Computer Interaction
Lets glasses help you talk better.
Teaching LLMs to See and Guide: Context-Aware Real-Time Assistance in Augmented Reality
Human-Computer Interaction
Helps AR/VR assistants understand what you're doing.