Words into World: A Task-Adaptive Agent for Language-Guided Spatial Retrieval in AR
By: Lixing Guo, Tobias Höllerer
Potential Business Impact:
Lets AR systems locate and interact with real-world objects described in plain language.
Traditional augmented reality (AR) systems predominantly rely on fixed-class detectors or fiducial markers, limiting their ability to interpret complex, open-vocabulary natural language queries. We present a modular AR agent system that integrates multimodal large language models (MLLMs) with grounded vision models to enable relational reasoning in space and language-conditioned spatial retrieval in physical environments. Our adaptive task agent coordinates MLLMs and coordinate-aware perception tools to address varying query complexities, ranging from simple object identification to multi-object relational reasoning, while returning meter-accurate 3D anchors. It constructs dynamic AR scene graphs encoding nine typed relations (spatial, structural-semantic, causal-functional), enabling MLLMs to understand not just what objects exist, but how they relate and interact in 3D space. Through task-adaptive region-of-interest highlighting and contextual spatial retrieval, the system guides human attention to information-dense areas while supporting human-in-the-loop refinement. For complex queries, the agent dynamically invokes coordinate-aware tools for selection, measurement, comparison, and actuation, grounding language understanding in physical operations. The modular architecture supports plug-and-play vision-language models without retraining, establishing AR agents as intermediaries that augment MLLMs with real-world spatial intelligence for interactive scene understanding. We also introduce GroundedAR-Bench, an evaluation framework for language-driven real-world localization and relation grounding across diverse environments.
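The abstract names two concrete mechanisms: a dynamic AR scene graph whose nine typed relations fall into spatial, structural-semantic, and causal-functional categories, and coordinate-aware tools the agent invokes for complex queries. The Python sketch below illustrates one plausible shape for these structures; it is not the authors' implementation, and the class names, the specific relation names, and the measure_distance tool are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code) of an AR scene graph with
# typed relations and a coordinate-aware tool the agent could dispatch.
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable


class RelationType(Enum):
    # Nine illustrative relations, grouped into the three categories
    # named in the abstract; the concrete relation names are assumptions.
    # Spatial
    LEFT_OF = "left_of"
    ABOVE = "above"
    NEAR = "near"
    # Structural-semantic
    PART_OF = "part_of"
    SAME_CATEGORY = "same_category"
    SUPPORTS = "supports"
    # Causal-functional
    ENABLES = "enables"
    CONTROLS = "controls"
    BLOCKS = "blocks"


@dataclass
class SceneNode:
    """An object anchored in the scene with a metric 3D position (world frame)."""
    label: str
    position_m: tuple[float, float, float]


@dataclass
class SceneGraph:
    nodes: dict[str, SceneNode] = field(default_factory=dict)
    edges: list[tuple[str, RelationType, str]] = field(default_factory=list)

    def add_relation(self, subj: str, rel: RelationType, obj: str) -> None:
        self.edges.append((subj, rel, obj))

    def relations_of(self, node_id: str) -> list[tuple[str, RelationType, str]]:
        # All edges touching a node, for relational queries about that object.
        return [e for e in self.edges if node_id in (e[0], e[2])]


def measure_distance(graph: SceneGraph, a: str, b: str) -> float:
    """Euclidean distance in meters between two anchored objects."""
    pa, pb = graph.nodes[a].position_m, graph.nodes[b].position_m
    return sum((x - y) ** 2 for x, y in zip(pa, pb)) ** 0.5


# Tool registry the agent could consult when a query needs physical grounding.
TOOLS: dict[str, Callable] = {"measure": measure_distance}


if __name__ == "__main__":
    g = SceneGraph()
    g.nodes["mug"] = SceneNode("mug", (0.2, 0.0, 1.1))
    g.nodes["laptop"] = SceneNode("laptop", (0.6, 0.0, 1.1))
    g.add_relation("mug", RelationType.LEFT_OF, "laptop")
    # e.g., query: "how far is the mug from the laptop?"
    print(f"{TOOLS['measure'](g, 'mug', 'laptop'):.2f} m")
```

In this reading, the MLLM plans over the graph's typed edges while metric questions are delegated to registered tools, keeping language reasoning and 3D computation separate.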
Similar Papers
Designing Memory-Augmented AR Agents for Spatiotemporal Reasoning in Personalized Task Assistance
Artificial Intelligence
Helps smart glasses remember your past activities to assist with tasks.
Teaching LLMs to See and Guide: Context-Aware Real-Time Assistance in Augmented Reality
Human-Computer Interaction
Helps AR/VR assistants understand what you're doing.
Weakly-supervised Latent Models for Task-specific Visual-Language Control
Artificial Intelligence
Helps robots see and move objects precisely.