A Novel Framework for Automated Explain Vision Model Using Vision-Language Models

Published: August 27, 2025 | arXiv ID: 2508.20227v1

By: Phu-Vinh Nguyen, Tan-Hanh Pham, Chris Ngo, et al.

Potential Business Impact:

Shows how computer "eyes" make mistakes.

Business Areas:
Image Recognition, Data and Analytics, Software

The development of many vision models focuses mainly on improving performance using metrics such as accuracy, IoU, and mAP, with less attention paid to explainability, owing to the difficulty of applying xAI methods to produce meaningful explanations of trained models. Although many existing xAI methods explain vision models sample by sample, methods that explain a model's general behavior, which can only be captured by running it over a large dataset, remain underexplored. Understanding how vision models behave on general images is important for preventing biased judgments and for identifying a model's trends and patterns. Leveraging Vision-Language Models, this paper proposes a pipeline that explains vision models at both the sample and dataset levels. The pipeline can be used to discover failure cases and gain insights into vision models with minimal effort, integrating vision model development with xAI analysis to advance image analysis.
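The dataset-level idea in the abstract, running a vision model over many samples, collecting its failures, and handing them to a Vision-Language Model for a natural-language summary, can be sketched roughly as below. This is a minimal illustration, not the paper's actual pipeline: the vision model here is a toy stub, and the function names, sample schema, and prompt format are all assumptions.

```python
# Illustrative sketch of a dataset-level explanation pipeline.
# All names and the prompt format are hypothetical; a real setup would
# plug in a trained vision model and a Vision-Language Model.

def run_vision_model(sample):
    """Stand-in for a trained vision classifier; returns a predicted label."""
    # Toy failure mode: the stub misclassifies any sample tagged 'occluded'.
    return "dog" if "occluded" in sample["tags"] else sample["label"]

def collect_failures(dataset):
    """Run the model over the whole dataset and keep misclassified samples."""
    return [s for s in dataset if run_vision_model(s) != s["label"]]

def build_vlm_prompt(failures):
    """Format failure cases into a prompt a VLM could summarize trends from."""
    lines = [
        f"- image {f['id']}: predicted {run_vision_model(f)}, "
        f"true {f['label']}, tags {f['tags']}"
        for f in failures
    ]
    return ("Summarize common failure patterns of this vision model:\n"
            + "\n".join(lines))

dataset = [
    {"id": 1, "label": "cat", "tags": ["indoor"]},
    {"id": 2, "label": "cat", "tags": ["occluded"]},
    {"id": 3, "label": "cat", "tags": ["occluded", "low-light"]},
]
failures = collect_failures(dataset)
print(len(failures))                 # → 2
print(build_vlm_prompt(failures))    # prompt the VLM would summarize
```

In this sketch the VLM's role is reduced to a prompt string; the point is the flow: model inference at scale, failure aggregation, then a language-model summary of the aggregate, which is what allows dataset-level patterns (here, the "occluded" tag) to surface rather than one-off sample explanations.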

Page Count
12 pages

Category
Computer Science:
CV and Pattern Recognition