CAuSE: Decoding Multimodal Classifiers using Faithful Natural Language Explanation

Published: December 7, 2025 | arXiv ID: 2512.06814v1

By: Dibyanayan Bandyopadhyay, Soham Bhattacharjee, Mohammed Hasanuzzaman, and others

Potential Business Impact:

Explains, in plain language, how an AI model arrives at its decisions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Multimodal classifiers function as opaque black-box models. While several techniques exist to interpret their predictions, few are as intuitive and accessible as natural language explanations (NLEs). To build trust, such explanations must faithfully capture the classifier's internal decision-making behavior, a property known as faithfulness. In this paper, we propose CAuSE (Causal Abstraction under Simulated Explanations), a novel framework for generating faithful NLEs for any pretrained multimodal classifier. We demonstrate through extensive empirical evaluations that CAuSE generalizes across datasets and models. Theoretically, we show that CAuSE, trained via interchange intervention, forms a causal abstraction of the underlying classifier. We further validate this with a redesigned metric for measuring causal faithfulness in multimodal settings. CAuSE surpasses other methods on this metric, and qualitative analysis reinforces its advantages. We also perform a detailed error analysis to pinpoint CAuSE's failure cases. For replicability, the code is available at https://github.com/newcodevelop/CAuSE
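The abstract's key training signal is the interchange intervention: an intermediate representation computed on one input is swapped into the forward pass of another, and a faithful explainer must mirror the resulting change in the prediction. A minimal toy sketch of the idea (not the authors' code; `toy_classifier` and all names here are hypothetical, and a real setup would intervene on neural network activations rather than a scalar):

```python
def toy_classifier(x, intervene_hidden=None):
    """A two-step 'model': compute a hidden value, then classify it."""
    hidden = x * 2  # stand-in for an intermediate representation
    # Interchange intervention: overwrite the hidden state with one
    # computed from a different (source) input.
    if intervene_hidden is not None:
        hidden = intervene_hidden
    return 1 if hidden > 5 else 0

base, source = 1, 4
source_hidden = source * 2            # hidden state the source input produces

y_base = toy_classifier(base)                                  # hidden = 2 -> class 0
y_intervened = toy_classifier(base, intervene_hidden=source_hidden)  # hidden = 8 -> class 1
```

A causal abstraction of this classifier is an explainer whose output changes in exactly the same way under the same swap; measuring agreement over many such interventions is, roughly, what a causal-faithfulness metric quantifies.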

Country of Origin
🇬🇧 United Kingdom

Repos / Data Links

Page Count
23 pages

Category
Computer Science:
Computation and Language