xAI-CV: An Overview of Explainable Artificial Intelligence in Computer Vision
By: Nguyen Van Tu, Pham Nguyen Hai Long, Vo Hoai Viet
Potential Business Impact:
Shows how smart computers see and decide.
Deep learning has become the de facto standard and dominant paradigm in image analysis tasks, achieving state-of-the-art performance. However, this approach often results in "black-box" models whose decision-making processes are difficult to interpret, raising concerns about reliability in critical applications. To address this challenge and give humans a way to understand how AI models process inputs and make decisions, the field of xAI has emerged. This paper surveys four representative approaches in xAI for visual perception tasks: (i) Saliency Maps, (ii) Concept Bottleneck Models (CBM), (iii) Prototype-based methods, and (iv) Hybrid approaches. We analyze their underlying mechanisms, strengths and limitations, and evaluation metrics, thereby providing a comprehensive overview to guide future research and applications.
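To make the first surveyed family concrete, a vanilla (gradient-based) saliency map scores each input pixel by the magnitude of the model output's gradient with respect to that pixel. The sketch below is illustrative only, not the paper's method: it uses a toy linear "classifier score" and a finite-difference gradient so it runs without any deep-learning framework; the function names (`model`, `saliency_map`) and the toy weights are assumptions for the example.

```python
import numpy as np

def model(x, w):
    # Toy differentiable "classifier score": a linear map over flattened pixels.
    # Stands in for a deep network's class logit.
    return float(np.dot(w, x))

def saliency_map(x, w, eps=1e-4):
    # Numerical gradient of the score w.r.t. each input pixel;
    # the absolute gradient is the vanilla saliency map.
    grad = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (model(xp, w) - model(xm, w)) / (2 * eps)
    return np.abs(grad)

rng = np.random.default_rng(0)
x = rng.random(6)                               # a flattened 2x3 "image"
w = np.array([0.5, -2.0, 0.0, 1.0, -0.1, 3.0])  # toy model weights
sal = saliency_map(x, w)
print(sal)  # for a linear model, saliency equals |w| per pixel
```

For a linear model the saliency reduces exactly to the absolute weights, which is a useful sanity check; with a real network one would backpropagate the logit to the input tensor instead of using finite differences.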
Similar Papers
Explaining What Machines See: XAI Strategies in Deep Object Detection Models
CV and Pattern Recognition
Shows how smart computers "see" to make them trustworthy.
A Novel Framework for Automated Explain Vision Model Using Vision-Language Models
CV and Pattern Recognition
Shows how computer "eyes" make mistakes.
Automated Processing of eXplainable Artificial Intelligence Outputs in Deep Learning Models for Fault Diagnostics of Large Infrastructures
CV and Pattern Recognition
Finds bad AI guesses in pictures of power lines.