Explaining What Machines See: XAI Strategies in Deep Object Detection Models
By: FatemehSadat Seyedmomeni, Mohammad Ali Keyvanrad
Potential Business Impact:
Shows how smart computers "see", so people can trust their decisions.
In recent years, deep learning has achieved unprecedented success in various computer vision tasks, particularly in object detection. However, the black-box nature and high complexity of deep neural networks pose significant challenges for interpretability, especially in critical domains such as autonomous driving, medical imaging, and security systems. Explainable Artificial Intelligence (XAI) aims to address this challenge by providing tools and methods to make model decisions more transparent, interpretable, and trustworthy for humans. This review provides a comprehensive analysis of state-of-the-art explainability methods specifically applied to object detection models. The paper begins by categorizing existing XAI techniques based on their underlying mechanisms: perturbation-based, gradient-based, backpropagation-based, and graph-based methods. Notable methods such as D-RISE, BODEM, D-CLOSE, and FSOD are discussed in detail. Furthermore, the paper investigates their applicability to various object detection architectures, including YOLO, SSD, Faster R-CNN, and EfficientDet. Statistical analysis of publication trends from 2022 to mid-2025 shows an accelerating interest in explainable object detection, indicating its increasing importance. The study also explores common datasets and evaluation metrics, and highlights the major challenges associated with model interpretability. By providing a structured taxonomy and a critical assessment of existing methods, this review aims to guide researchers and practitioners in selecting suitable explainability techniques for object detection applications and to foster the development of more interpretable AI systems.
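To make the perturbation-based category concrete, the sketch below illustrates the core idea behind D-RISE: randomly mask the input image, re-run the detector, and weight each mask by how well the detections on the masked image match a chosen target detection. This is a minimal illustration, not the authors' implementation; the `detector` callable (returning a list of `(box, class_score_vector)` proposals), the mask parameters, and the simplified similarity (IoU times cosine similarity of class scores, omitting the objectness term used in the published method) are all assumptions for the example.

```python
import numpy as np

def generate_masks(n_masks, grid=16, image_hw=(480, 640), p=0.5, rng=None):
    """RISE-style masks: random low-res binary grids, upsampled with a random shift."""
    rng = rng or np.random.default_rng(0)
    H, W = image_hw
    cell_h, cell_w = int(np.ceil(H / grid)), int(np.ceil(W / grid))
    masks = np.empty((n_masks, H, W), dtype=np.float32)
    for i in range(n_masks):
        grid_mask = (rng.random((grid + 1, grid + 1)) < p).astype(np.float32)
        up = np.kron(grid_mask, np.ones((cell_h, cell_w), dtype=np.float32))
        dy, dx = rng.integers(0, cell_h), rng.integers(0, cell_w)
        masks[i] = up[dy:dy + H, dx:dx + W]
    return masks

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def d_rise_saliency(image, detector, target_box, target_scores, n_masks=1000):
    """Saliency map for one target detection. `detector` is a hypothetical
    callable: detector(img) -> list of (box, class_score_vector) proposals."""
    saliency = np.zeros(image.shape[:2], dtype=np.float32)
    masks = generate_masks(n_masks, image_hw=image.shape[:2])
    t = np.asarray(target_scores) / (np.linalg.norm(target_scores) + 1e-8)
    for mask in masks:
        masked = image * mask[..., None]          # occlude parts of the image
        best = 0.0
        for box, scores in detector(masked):
            s = np.asarray(scores)
            cos = float(t @ (s / (np.linalg.norm(s) + 1e-8)))
            best = max(best, iou(target_box, box) * cos)  # best-matching proposal
        saliency += best * mask                   # weight mask by match quality
    return saliency / n_masks
```

Because it only queries the detector on perturbed inputs, a method like this is model-agnostic, which is why perturbation-based techniques apply across YOLO, SSD, Faster R-CNN, and EfficientDet alike; the trade-off is the cost of running the detector once per mask.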
Similar Papers
xAI-CV: An Overview of Explainable Artificial Intelligence in Computer Vision
CV and Pattern Recognition
Shows how smart computers see and decide.
Automated Processing of eXplainable Artificial Intelligence Outputs in Deep Learning Models for Fault Diagnostics of Large Infrastructures
CV and Pattern Recognition
Finds bad AI guesses in pictures of power lines.
ODExAI: A Comprehensive Object Detection Explainable AI Evaluation
CV and Pattern Recognition
Checks how well AI explains the objects it sees.