Score: 1

Training Feature Attribution for Vision Models

Published: October 10, 2025 | arXiv ID: 2510.09135v1

By: Aziz Bacha, Thomas George

Potential Business Impact:

Pinpoints which parts of which training images drive a vision model's mistakes, helping teams spot harmful examples and dataset shortcuts before deployment.

Business Areas:
Image Recognition, Data and Analytics, Software

Deep neural networks are often considered opaque systems, prompting the need for explainability methods to improve trust and accountability. Existing approaches typically attribute test-time predictions either to input features (e.g., pixels in an image) or to influential training examples. We argue that both perspectives should be studied jointly. This work explores *training feature attribution*, which links test predictions to specific regions of specific training images and thereby provides new insights into the inner workings of deep models. Our experiments on vision datasets show that training feature attribution yields fine-grained, test-specific explanations: it identifies harmful examples that drive misclassifications and reveals spurious correlations, such as patch-based shortcuts, that conventional attribution methods fail to expose.
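
To make the idea concrete, here is a minimal sketch (not the authors' implementation) of one plausible form of training feature attribution: score a training example by how well its loss gradient aligns with the test loss gradient (a TracIn-style influence), then push that score back onto the training image's pixels with gradient-times-input. The `SmallCNN` model and the dummy tensors below are illustrative assumptions, not part of the paper.

```python
# Hedged sketch of training feature attribution: influence of a training image
# on a test prediction, attributed to the training image's pixels.
# Model, data, and function names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, num_classes)

    def forward(self, x):
        h = F.relu(self.conv(x))
        return self.fc(h.flatten(1))

def training_feature_attribution(model, x_test, y_test, x_train, y_train):
    """Return (influence score, HxW pixel attribution) for one training image."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the test loss w.r.t. parameters (treated as a constant below).
    test_loss = F.cross_entropy(model(x_test), y_test)
    g_test = [g.detach() for g in torch.autograd.grad(test_loss, params)]

    # Gradient of the training loss, kept in the graph so it depends on x_train.
    x_train = x_train.clone().requires_grad_(True)
    train_loss = F.cross_entropy(model(x_train), y_train)
    g_train = torch.autograd.grad(train_loss, params, create_graph=True)

    # TracIn-style influence: dot product of the two parameter gradients.
    influence = sum((gt * gr).sum() for gt, gr in zip(g_test, g_train))

    # Attribute the influence back to training pixels (gradient x input).
    pixel_grad, = torch.autograd.grad(influence, x_train)
    attribution = (pixel_grad * x_train).sum(dim=1)  # collapse channels -> HxW

    return influence.item(), attribution.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = SmallCNN()
    x_test, y_test = torch.randn(1, 3, 32, 32), torch.tensor([3])
    x_train, y_train = torch.randn(1, 3, 32, 32), torch.tensor([3])
    score, attr_map = training_feature_attribution(model, x_test, y_test, x_train, y_train)
    print(f"influence: {score:.4f}, attribution map shape: {tuple(attr_map.shape)}")
```

In this reading, a large positive influence flags a training image that pushes the model toward (or away from) the test prediction, and the pixel map highlights which regions of that image carry the effect, which is the kind of signal the abstract describes for exposing harmful examples and patch-based shortcuts.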

Page Count
19 pages

Category
Computer Science:
CV and Pattern Recognition