AdjustAR: AI-Driven In-Situ Adjustment of Site-Specific Augmented Reality Content
By: Nels Numan, Jessica Van Brummelen, Ziwen Lu, et al.
Potential Business Impact:
Keeps virtual objects in place as the real world changes.
Site-specific outdoor AR experiences are typically authored using static 3D models, but are deployed in physical environments that change over time. As a result, virtual content may become misaligned with its intended real-world referents, degrading user experience and compromising contextual interpretation. We present AdjustAR, a system that supports in-situ correction of AR content in dynamic environments using multimodal large language models (MLLMs). Given a composite image comprising the originally authored view and the current live user view from the same perspective, an MLLM detects contextual misalignments and proposes revised 2D placements for affected AR elements. These corrections are backprojected into 3D space to update the scene at runtime. By leveraging MLLMs for visual-semantic reasoning, this approach enables automated runtime corrections to maintain alignment with the authored intent as real-world target environments evolve.
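The abstract does not specify how the 2D-to-3D backprojection is implemented; as an illustration only, the final step could be sketched with a standard pinhole camera model, where a corrected 2D pixel placement is lifted into 3D world space given a depth estimate and the camera's intrinsics and pose (all names here are hypothetical):

```python
import numpy as np

def backproject_2d_to_3d(uv, depth, K, cam_to_world):
    """Hypothetical sketch of the backprojection step: lift a
    corrected 2D pixel placement (u, v) into 3D world space using
    the pinhole camera model. Assumes `depth` is the distance along
    the optical axis, `K` is the 3x3 intrinsic matrix, and
    `cam_to_world` is a 4x4 camera-to-world pose matrix."""
    u, v = uv
    fx, fy = K[0, 0], K[1, 1]  # focal lengths in pixels
    cx, cy = K[0, 2], K[1, 2]  # principal point
    # Unproject the pixel into camera coordinates at the given depth
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    p_cam = np.array([x, y, depth, 1.0])  # homogeneous camera-space point
    # Transform into world coordinates to update the AR scene anchor
    return (cam_to_world @ p_cam)[:3]
```

For example, a pixel at the principal point with depth 2 m and an identity pose maps to the world point (0, 0, 2), directly in front of the camera.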
Similar Papers
A Vision for AI-Driven Adaptation of Dynamic AR Content to Users and Environments
Human-Computer Interaction
Makes virtual AR content adapt smartly to people and places.
ImaginateAR: AI-Assisted In-Situ Authoring in Augmented Reality
Human-Computer Interaction
Creates AR scenes from your spoken ideas.
Words into World: A Task-Adaptive Agent for Language-Guided Spatial Retrieval in AR
CV and Pattern Recognition
Lets computers understand and interact with real-world objects.