Enhancing XAI Narratives through Multi-Narrative Refinement and Knowledge Distillation
By: Flavio Giorgi, Matteo Silvestri, Cesare Campagnano, and more
Potential Business Impact:
Turns complex AI explanations into simple stories that non-experts can understand.
Explainable Artificial Intelligence has become a crucial area of research, aiming to demystify the decision-making processes of deep learning models. Among various explainability techniques, counterfactual explanations have proven particularly promising, as they offer insights into model behavior by highlighting the minimal changes that would alter a prediction. Despite their potential, these explanations are often complex and technical, making them difficult for non-experts to interpret. To address this challenge, we propose a novel pipeline that leverages Language Models, large and small, to compose narratives for counterfactual explanations. We employ knowledge distillation techniques along with a refining mechanism to enable Small Language Models to perform comparably to their larger counterparts while maintaining robust reasoning abilities. In addition, we introduce a simple but effective evaluation method to assess natural language narratives, designed to verify whether a model's response is in line with the factual and counterfactual ground truth. As a result, our proposed pipeline enhances both the reasoning capabilities and practical performance of student models, making them more suitable for real-world use cases.
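The abstract does not spell out the implementation, so the sketch below is only an illustration under assumptions, not the authors' method. It assumes a tabular counterfactual explanation given as factual and counterfactual feature dictionaries, and every name in it (`check_narrative`, `refine`, the toy loan example) is hypothetical. It shows the two ideas the abstract names: an alignment check that verifies a narrative against the factual and counterfactual ground truth, and a multi-narrative refinement step that keeps the best of several candidate narratives as a distillation target for a student model.

```python
# Hypothetical sketch of narrative evaluation and multi-narrative refinement.
# Assumes tabular counterfactuals; the paper's actual pipeline may differ.

def narrative_mentions(narrative: str, feature: str, value) -> bool:
    """Crude surface check: the feature name and its value both appear."""
    text = narrative.lower()
    return feature.lower() in text and str(value).lower() in text

def check_narrative(narrative: str, factual: dict, counterfactual: dict) -> float:
    """Fraction of changed features whose factual and counterfactual
    values are both reported correctly in the narrative."""
    changed = [f for f in factual if factual[f] != counterfactual[f]]
    if not changed:
        return 1.0
    hits = sum(
        narrative_mentions(narrative, f, factual[f])
        and narrative_mentions(narrative, f, counterfactual[f])
        for f in changed
    )
    return hits / len(changed)

def refine(candidates: list, factual: dict, counterfactual: dict) -> str:
    """Refinement, sketched: among several teacher-generated candidate
    narratives, keep the one best aligned with the ground truth; the
    kept narrative would then serve as a fine-tuning target for the
    student Small Language Model."""
    return max(candidates, key=lambda n: check_narrative(n, factual, counterfactual))

# Toy usage: a loan-denial counterfactual with one changed feature.
factual = {"income": 30000, "credit_history": "poor"}
counterfactual = {"income": 45000, "credit_history": "poor"}
candidates = [
    "The loan was denied because the income was 30000; "
    "raising income to 45000 would flip the decision.",
    "The loan was denied because of a short employment history.",  # misaligned
]
best = refine(candidates, factual, counterfactual)
print(check_narrative(best, factual, counterfactual))  # 1.0
```

In practice the candidates would come from a large teacher model rather than a fixed list, and the alignment check could be implemented with an entailment model instead of string matching; the string version here just keeps the sketch self-contained.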
Similar Papers
From Facts to Foils: Designing and Evaluating Counterfactual Explanations for Smart Environments
Artificial Intelligence
Helps smart homes explain why things happened.
Actionable and diverse counterfactual explanations incorporating domain knowledge and causal constraints
Artificial Intelligence
Makes AI suggestions practical and believable.
Counterfactual Language Reasoning for Explainable Recommendation Systems
Artificial Intelligence
Shows why a computer suggests things you might like.