A Novel Framework for Automated Explain Vision Model Using Vision-Language Models
By: Phu-Vinh Nguyen, Tan-Hanh Pham, Chris Ngo, and more
Potential Business Impact:
Shows how computer "eyes" make mistakes.
The development of many vision models focuses mainly on improving performance against metrics such as accuracy, IoU, and mAP, with less attention to explainability, owing to the complexity of applying xAI methods to produce meaningful explanations of trained models. Although many existing xAI methods explain vision models sample by sample, methods that explain a model's general behavior, which can only be observed after running it on a large dataset, remain underexplored. Understanding how a vision model behaves across general images is important for preventing biased judgments and for identifying the model's trends and patterns. Using Vision-Language Models, this paper proposes a pipeline that explains vision models at both the sample and dataset levels. The proposed pipeline can discover failure cases and yield insights into vision models with minimal effort, integrating vision model development with xAI analysis to advance image analysis.
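The paper does not publish its implementation here, but the abstract's idea of sample-level and dataset-level explanation can be illustrated with a minimal sketch: run the vision model over a dataset, collect its failure cases, ask a VLM to explain each failure, and then summarize the explanations into a dataset-level description of the model's behavior. The helpers `predict`, `vlm_describe`, and `llm_summarize` below are hypothetical placeholders, not the authors' API; substitute your own model and VLM client.

```python
# Minimal sketch (not the authors' implementation) of VLM-assisted explanation:
# collect a vision model's failure cases, explain each with a VLM (sample level),
# then summarize the explanations (dataset level). All model callables are
# placeholders to be supplied by the user.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class FailureCase:
    image_path: str
    predicted: str
    expected: str


def collect_failures(
    dataset: List[Tuple[str, str]],          # (image_path, ground-truth label) pairs
    predict: Callable[[str], str],           # vision model under analysis (hypothetical)
) -> List[FailureCase]:
    """Run the vision model over the dataset and keep only misclassified samples."""
    failures = []
    for image_path, label in dataset:
        prediction = predict(image_path)
        if prediction != label:
            failures.append(FailureCase(image_path, prediction, label))
    return failures


def explain_failures(
    failures: List[FailureCase],
    vlm_describe: Callable[[str, str], str],  # (image_path, prompt) -> text (hypothetical)
) -> List[str]:
    """Sample-level step: ask the VLM why each misclassification might have happened."""
    explanations = []
    for case in failures:
        prompt = (
            f"The classifier predicted '{case.predicted}' but the correct label is "
            f"'{case.expected}'. Describe visual properties of this image that could "
            f"explain the mistake."
        )
        explanations.append(vlm_describe(case.image_path, prompt))
    return explanations


def summarize_behavior(
    explanations: List[str],
    llm_summarize: Callable[[str], str],      # text -> summary (hypothetical)
) -> str:
    """Dataset-level step: aggregate per-sample explanations into recurring patterns."""
    joined = "\n".join(f"- {e}" for e in explanations)
    return llm_summarize(
        "Summarize recurring patterns or biases in these failure explanations:\n" + joined
    )
```

Under these assumptions, the dataset-level summary is what surfaces general trends (for example, a recurring sensitivity to background clutter) that no single-sample xAI method would reveal on its own.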
Similar Papers
xAI-CV: An Overview of Explainable Artificial Intelligence in Computer Vision
CV and Pattern Recognition
Shows how smart computers see and decide.
Large Language Models Facilitate Vision Reflection in Image Classification
CV and Pattern Recognition
Helps AI understand pictures by using words.
Decoding the Multimodal Maze: A Systematic Review on the Adoption of Explainability in Multimodal Attention-based Models
Machine Learning (CS)
Helps understand how AI uses different information.