Score: 1

Investigating Training and Generalization in Faithful Self-Explanations of Large Language Models

Published: December 8, 2025 | arXiv ID: 2512.07288v1

By: Tomoki Doi, Masaru Isonuma, Hitomi Yanaka

Potential Business Impact:

Makes AI explain its thinking more honestly.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models have the potential to generate explanations for their own predictions in a variety of styles based on user instructions. Recent research has examined whether these self-explanations faithfully reflect the models' actual behavior and has found that they often lack faithfulness. However, the question of how to improve faithfulness remains underexplored. Moreover, because different explanation styles have superficially distinct characteristics, it is unclear whether improvements observed in one style also arise in other styles. This study analyzes the effects of training for faithful self-explanations and the extent to which these effects generalize, using three classification tasks and three explanation styles. We construct one-word-constrained explanations that are likely to be faithful using a feature attribution method, and use these pseudo-faithful self-explanations for continual learning on instruction-tuned models. Our experiments demonstrate that training can improve self-explanation faithfulness across all classification tasks and explanation styles, and that these improvements also show signs of generalizing to multi-word settings and to unseen tasks. Furthermore, we find consistent cross-style generalization among the three styles, suggesting that training may contribute to a broader improvement in faithful self-explanation ability.

Country of Origin
🇯🇵 Japan

Repos / Data Links

Page Count
16 pages

Category
Computer Science: Computation and Language