Zero-Shot Textual Explanations via Translating Decision-Critical Features
By: Toshinori Yamauchi, Hiroshi Kera, Kazuhiko Kawamoto
Potential Business Impact:
Explains why computers see what they see.
Textual explanations make image classifier decisions transparent by describing the prediction rationale in natural language. Large vision-language models can generate captions but are designed for general visual understanding, not classifier-specific reasoning. Existing zero-shot explanation methods align global image features with language, producing descriptions of what is visible rather than what drives the prediction. We propose TEXTER, which overcomes this limitation by isolating decision-critical features before alignment. TEXTER identifies the neurons contributing to the prediction and emphasizes the features encoded in those neurons -- i.e., the decision-critical features. It then maps these emphasized features into the CLIP feature space to retrieve textual explanations that reflect the model's reasoning. A sparse autoencoder further improves interpretability, particularly for Transformer architectures. Extensive experiments show that TEXTER generates more faithful and interpretable explanations than existing methods. The code will be publicly released.
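The abstract does not give implementation details, but the pipeline it describes (neuron-level attribution for the predicted class, emphasis of the decision-critical features, alignment with CLIP text embeddings, and retrieval of the closest textual explanation) can be sketched roughly as below. All names, the positive-contribution weighting, and the linear projection into the CLIP space are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Hypothetical sketch of the pipeline described in the abstract.
# The weighting scheme and the projection matrix are assumptions for illustration.

def decision_critical_explanation(
    features: np.ndarray,        # (d,) penultimate-layer activations for one image
    classifier_w: np.ndarray,    # (num_classes, d) final linear-classifier weights
    proj_to_clip: np.ndarray,    # (clip_dim, d) assumed map into the CLIP feature space
    text_embeds: np.ndarray,     # (num_texts, clip_dim) CLIP embeddings of candidate texts
    candidate_texts: list[str],
    top_k: int = 3,
):
    # 1) Predict the class and measure each neuron's contribution to that logit.
    logits = classifier_w @ features
    pred = int(np.argmax(logits))
    contributions = classifier_w[pred] * features          # (d,)

    # 2) Emphasize decision-critical neurons: keep positive contributions,
    #    scaled by how strongly each neuron supports the prediction.
    weights = np.clip(contributions, 0.0, None)
    weights = weights / (weights.max() + 1e-8)
    emphasized = features * weights

    # 3) Map the emphasized features into the CLIP space and retrieve the
    #    most similar textual explanations by cosine similarity.
    query = proj_to_clip @ emphasized
    query = query / (np.linalg.norm(query) + 1e-8)
    texts = text_embeds / (np.linalg.norm(text_embeds, axis=1, keepdims=True) + 1e-8)
    sims = texts @ query
    order = np.argsort(-sims)[:top_k]
    return pred, [(candidate_texts[i], float(sims[i])) for i in order]
```

The sparse autoencoder mentioned in the abstract would presumably act on these intermediate features to expose more interpretable units, particularly for Transformer backbones; it is omitted from this simplified sketch.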
Similar Papers
DEXTER: Diffusion-Guided EXplanations with TExtual Reasoning for Vision Models
CV and Pattern Recognition
Explains how AI sees things without seeing real examples.
Unlocking Text Capabilities in Vision Models
CV and Pattern Recognition
Lets computers explain what pictures show.
One Patch to Caption Them All: A Unified Zero-Shot Captioning Framework
CV and Pattern Recognition
Lets computers describe any part of a picture.