ContextualSHAP: Enhancing SHAP Explanations Through Contextual Language Generation
By: Latifa Dwiyanti, Sergio Ryan Wibisono, Hidetaka Nambo
Potential Business Impact:
Explains AI decisions in simple words for everyone.
Explainable Artificial Intelligence (XAI) has become an increasingly important area of research, particularly as machine learning models are deployed in high-stakes domains. Among various XAI approaches, SHAP (SHapley Additive exPlanations) has gained prominence due to its ability to provide both global and local explanations across different machine learning models. While SHAP effectively visualizes feature importance, it often lacks contextual explanations that are meaningful for end-users, especially those without technical backgrounds. To address this gap, we propose a Python package that extends SHAP by integrating it with a large language model (LLM), specifically OpenAI's GPT, to generate contextualized textual explanations. This integration is guided by user-defined parameters (such as feature aliases, descriptions, and additional background) to tailor the explanation to both the model context and the user perspective. We hypothesize that this enhancement can improve the perceived understandability of SHAP explanations. To evaluate the effectiveness of the proposed package, we applied it in a healthcare-related case study and conducted user evaluations involving real end-users. The results, based on Likert-scale surveys and follow-up interviews, indicate that the generated explanations were perceived as more understandable and contextually appropriate compared to visual-only outputs. While the findings are preliminary, they suggest that combining visualization with contextualized text may support more user-friendly and trustworthy model explanations.
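The abstract does not show the package's interface, but the described workflow (compute local SHAP values, enrich them with user-supplied feature aliases, descriptions, and background, then prompt an LLM for a plain-language explanation) can be sketched as follows. This is a minimal illustration assuming the standard `shap` and `openai` Python packages; the function name `explain_in_context`, the parameter names, and the prompt construction are hypothetical and do not reflect the actual API of the proposed package.

```python
# Hypothetical sketch of the SHAP-plus-LLM workflow described in the abstract.
# Assumes the standard `shap` and `openai` packages; all names below are illustrative.
import shap
from openai import OpenAI

def explain_in_context(model, X, instance, feature_aliases,
                       feature_descriptions, background,
                       llm_model="gpt-4o-mini"):
    # Compute local SHAP values for a single instance (one-row DataFrame).
    explainer = shap.Explainer(model, X)
    explanation = explainer(instance)
    contributions = dict(zip(instance.columns, explanation.values[0]))

    # Translate raw feature names and SHAP values into a plain-language prompt,
    # enriched with the user-defined aliases, descriptions, and background.
    lines = []
    for feature, value in sorted(contributions.items(),
                                 key=lambda kv: abs(kv[1]), reverse=True):
        alias = feature_aliases.get(feature, feature)
        desc = feature_descriptions.get(feature, "")
        lines.append(f"- {alias} ({desc}): SHAP contribution {value:+.3f}")
    prompt = (
        f"Context: {background}\n"
        "Explain the following feature contributions to a non-technical reader "
        "in a few sentences:\n" + "\n".join(lines)
    )

    # Ask the LLM for a contextualized textual explanation.
    client = OpenAI()
    response = client.chat.completions.create(
        model=llm_model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```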
Similar Papers
Integration of Explainable AI Techniques with Large Language Models for Enhanced Interpretability for Sentiment Analysis
Computation and Language
Shows how computers understand feelings, layer by layer.
Privacy-Preserving Explainable AIoT Application via SHAP Entropy Regularization
Cryptography and Security
Protects smart home secrets from AI.
From Black Box to Transparency: Enhancing Automated Interpreting Assessment with Explainable AI in College Classrooms
Computation and Language
Helps computers judge translation quality better.