Score: 1

Zero-Shot Textual Explanations via Translating Decision-Critical Features

Published: December 8, 2025 | arXiv ID: 2512.07245v1

By: Toshinori Yamauchi, Hiroshi Kera, Kazuhiko Kawamoto

Potential Business Impact:

Explains, in plain language, why an image classifier decided what it did.

Business Areas:
Text Analytics, Data and Analytics, Software

Textual explanations make image classifier decisions transparent by describing the prediction rationale in natural language. Large vision-language models can generate captions but are designed for general visual understanding, not classifier-specific reasoning. Existing zero-shot explanation methods align global image features with language, producing descriptions of what is visible rather than what drives the prediction. We propose TEXTER, which overcomes this limitation by isolating decision-critical features before alignment. TEXTER identifies the neurons contributing to the prediction and emphasizes the features encoded in those neurons -- i.e., the decision-critical features. It then maps these emphasized features into the CLIP feature space to retrieve textual explanations that reflect the model's reasoning. A sparse autoencoder further improves interpretability, particularly for Transformer architectures. Extensive experiments show that TEXTER generates more faithful and interpretable explanations than existing methods. The code will be publicly released.
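The pipeline described in the abstract can be pictured in a few lines of code. The sketch below is a hypothetical illustration, not the authors' implementation: it assumes the decision-critical neurons are scored by class-weight times activation (one common attribution choice; the paper's exact scoring rule may differ), uses a simple top-20% mask as the emphasis step, and uses random tensors and a stand-in linear projection in place of the real classifier features and CLIP embeddings.

```python
import torch
import torch.nn.functional as F

# Toy dimensions; in practice these would be the classifier's feature width
# and the CLIP embedding width.
torch.manual_seed(0)
feat_dim, clip_dim = 512, 256

# Stand-ins: penultimate-layer features of one image and the classifier's
# weight row for the predicted class (both would come from the trained model).
features = torch.randn(feat_dim)
class_weight = torch.randn(feat_dim)

# 1) Score each neuron's contribution to the predicted class
#    (weight * activation here; hypothetical choice, not the paper's exact rule).
contrib = class_weight * features

# 2) Emphasize the decision-critical features: keep the top-20% contributing
#    neurons and suppress the rest.
mask = (contrib >= contrib.quantile(0.8)).float()
emphasized = features * mask

# 3) Map the emphasized features into CLIP space (stand-in linear projection)
#    and retrieve the candidate explanation with the highest cosine similarity.
proj = torch.nn.Linear(feat_dim, clip_dim, bias=False)
candidate_texts = ["striped fur", "pointed ears", "green grass", "blue sky"]
text_emb = torch.randn(len(candidate_texts), clip_dim)  # stand-in for CLIP text embeddings

img_emb = proj(emphasized)
sims = F.cosine_similarity(img_emb.unsqueeze(0), text_emb, dim=-1)
print("retrieved explanation:", candidate_texts[sims.argmax().item()])
```

Per the abstract, the paper additionally uses a sparse autoencoder to make the emphasized features more interpretable, particularly for Transformer backbones; the hard mask above only stands in for that emphasis step.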

Country of Origin
🇯🇵 Japan

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
Computer Vision and Pattern Recognition