A Neurosymbolic Framework for Interpretable Cognitive Attack Detection in Augmented Reality
By: Rongqian Chen, Allison Andreyev, Yanming Xiu, and more
Potential Business Impact:
Stops fake AR content from tricking you.
Augmented Reality (AR) enriches perception by overlaying virtual elements on the physical world. Due to its growing popularity, cognitive attacks that alter AR content to manipulate users' semantic perception have received increasing attention. Existing detection methods often focus on visual changes, which are restricted to pixel- or image-level processing and lack semantic reasoning capabilities, or they rely on pre-trained vision-language models (VLMs), which function as black-box approaches with limited interpretability. In this paper, we present CADAR, a novel neurosymbolic approach for cognitive attack detection in AR. It fuses multimodal vision-language inputs using neural VLMs to obtain a symbolic perception-graph representation, incorporating prior knowledge, salience weighting, and temporal correlations. The model then enables particle-filter-based statistical reasoning -- a sequential Monte Carlo method -- to detect cognitive attacks. Thus, CADAR inherits the adaptability of pre-trained VLMs and the interpretability and reasoning rigor of particle filtering. Experiments on an extended AR cognitive attack dataset show accuracy improvements of up to 10.7% over strong baselines on challenging AR attack scenarios, underscoring the promise of neurosymbolic methods for effective and interpretable cognitive attack detection.
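To make the sequential Monte Carlo idea concrete, here is a minimal illustrative sketch of a bootstrap particle filter used for anomaly flagging. This is not the CADAR implementation: the scalar latent "scene consistency" state, the Gaussian process and observation models, and the evidence threshold are all assumptions chosen for the example; the paper's method reasons over symbolic perception graphs rather than a scalar signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_anomaly(observations, n_particles=500,
                            process_std=0.1, obs_std=0.2,
                            threshold=1e-3):
    """Bootstrap particle filter over a hypothetical scalar latent state.

    An observation is flagged as a potential attack when its estimated
    marginal likelihood under the predicted particles collapses, i.e. the
    new evidence is inconsistent with everything tracked so far.
    """
    particles = rng.normal(0.0, 1.0, n_particles)  # broad prior
    flags = []
    for z in observations:
        # Predict: random-walk process model (illustrative assumption).
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Update: Gaussian observation likelihood for each particle.
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        evidence = w.mean()  # Monte Carlo estimate of p(z | past)
        flags.append(evidence < threshold)
        if w.sum() == 0.0:
            w = np.ones(n_particles)  # avoid degenerate weights
        w = w / w.sum()
        # Resample: multinomial resampling concentrates the particle set.
        particles = rng.choice(particles, size=n_particles, p=w)
    return flags

# A stream of consistent observations followed by one abrupt outlier:
flags = particle_filter_anomaly([0.0] * 10 + [10.0])
```

The filter tolerates the first ten observations (the evidence stays high once the particles concentrate) and flags only the final jump, mirroring how low-likelihood semantic changes could be surfaced as suspected attacks.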
Similar Papers
Toward Safe, Trustworthy and Realistic Augmented Reality User Experience
CV and Pattern Recognition
Keeps augmented reality safe from bad virtual things.
Perception Graph for Cognitive Attack Reasoning in Augmented Reality
Artificial Intelligence
Protects soldiers from fake AR sights.
AR as an Evaluation Playground: Bridging Metrics and Visual Perception of Computer Vision Models
CV and Pattern Recognition
Lets people test computer vision with games.