Teaching LLMs to See and Guide: Context-Aware Real-Time Assistance in Augmented Reality

Published: November 1, 2025 | arXiv ID: 2511.00730v2

By: Mahya Qorbani, Kamran Paynabar, Mohsen Moghaddam

Potential Business Impact:

Helps AR/VR assistants understand what the user is doing and answer task questions in real time.

Business Areas:
Augmented Reality Hardware, Software

The growing adoption of augmented and virtual reality (AR and VR) technologies in industrial training and on-the-job assistance has created new opportunities for intelligent, context-aware support systems. As workers perform complex tasks guided by AR and VR, these devices capture rich streams of multimodal data, including gaze, hand actions, and task progression, that can reveal user intent and task state in real time. However, leveraging this information effectively remains a major challenge. In this work, we present a context-aware large language model (LLM) assistant that integrates diverse data modalities, such as hand actions, task steps, and dialogue history, into a unified framework for real-time question answering. To systematically study how context influences performance, we introduce an incremental prompting framework in which each model version receives progressively richer contextual inputs. Using the HoloAssist dataset, which records AR-guided task executions, we evaluate how each modality contributes to the assistant's effectiveness. Our experiments show that incorporating multimodal context significantly improves the accuracy and relevance of responses. These findings highlight the potential of LLM-driven multimodal integration to enable adaptive, intuitive support in AR- and VR-based industrial training and on-the-job assistance.
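The incremental prompting idea from the abstract can be illustrated with a minimal sketch: each prompt "level" folds one more contextual modality (task step, hand action, dialogue history) into the prompt before the worker's question is sent to the model, so that gains can be attributed to each added modality. The field names, level ordering, and the llm_client placeholder below are illustrative assumptions, not the paper's actual schema or implementation.

```python
# Minimal sketch of incremental context prompting for an AR assistant.
# Field names and context levels are hypothetical, not the paper's exact design.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class FrameContext:
    """Context captured from the AR headset at question time (assumed fields)."""
    hand_action: Optional[str] = None        # e.g., "holding hex key"
    task_step: Optional[str] = None          # e.g., "Step 4: tighten rear bracket"
    dialogue_history: List[str] = field(default_factory=list)


def build_prompt(question: str, ctx: FrameContext, level: int) -> str:
    """Assemble a prompt; higher levels include progressively richer context."""
    parts = ["You are a real-time assistant guiding a worker through an AR task."]
    if level >= 1 and ctx.task_step:
        parts.append(f"Current task step: {ctx.task_step}")
    if level >= 2 and ctx.hand_action:
        parts.append(f"Detected hand action: {ctx.hand_action}")
    if level >= 3 and ctx.dialogue_history:
        parts.append("Dialogue so far:\n" + "\n".join(ctx.dialogue_history))
    parts.append(f"Worker question: {question}")
    return "\n\n".join(parts)


# Example: compare prompts across context levels for a single question.
ctx = FrameContext(
    hand_action="holding hex key",
    task_step="Step 4: tighten the rear bracket",
    dialogue_history=["Assistant: Align the bracket before tightening."],
)
for level in range(4):
    prompt = build_prompt("Which bolt do I tighten first?", ctx, level)
    # response = llm_client.complete(prompt)  # placeholder for any LLM API
    print(f"--- context level {level} ---\n{prompt}\n")
```

Running the same question through each level and scoring the responses is one simple way to measure how much each added modality contributes, which mirrors the evaluation strategy the abstract describes on HoloAssist.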

Country of Origin
🇺🇸 United States

Page Count
16 pages

Category
Computer Science:
Human-Computer Interaction