Integration of Explainable AI Techniques with Large Language Models for Enhanced Interpretability for Sentiment Analysis
By: Thivya Thogesan, Anupiya Nugaliyadde, Kok Wai Wong
Potential Business Impact:
Shows how computers understand feelings, layer by layer.
Interpretability remains a key difficulty in sentiment analysis with Large Language Models (LLMs), particularly in high-stakes applications where it is crucial to understand the rationale behind predictions. This research addresses the problem by introducing a technique that applies SHAP (SHapley Additive exPlanations) layer by layer, decomposing LLMs into components such as the embedding layer, encoder, decoder, and attention layers to provide a layer-by-layer understanding of sentiment prediction. By breaking LLMs into these parts, the approach offers a clearer view of how the model interprets and categorises sentiment. The method is evaluated on the Stanford Sentiment Treebank (SST-2) dataset, showing how different sentences affect different layers. Experimental evaluations demonstrate the effectiveness of layer-wise SHAP analysis in clarifying sentiment-specific token attributions, providing a notable improvement over existing whole-model explainability techniques. These results highlight how the proposed approach could improve the reliability and transparency of LLM-based sentiment analysis in critical applications.
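To make the idea concrete, here is a minimal sketch, not the authors' exact pipeline: it computes standard token-level SHAP attributions for an SST-2 sentiment classifier and then exposes the per-layer hidden states (embedding output plus each encoder block) that a layer-wise decomposition would attribute against. The checkpoint name, example sentence, and the choice of DistilBERT are assumptions for illustration only.

```python
# Illustrative sketch only (assumed model and inputs), not the paper's implementation.
import shap
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed SST-2 checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Whole-model explanation: SHAP's text masker attributes the prediction to input tokens.
clf = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer, top_k=None)
explainer = shap.Explainer(clf)
shap_values = explainer(["The film was surprisingly moving and well acted."])
print(shap_values)  # per-token contributions to the POSITIVE / NEGATIVE scores

# Per-layer view: hidden states after the embedding layer and after each encoder block.
# A layer-wise SHAP analysis would compute attributions at each of these stages,
# rather than only at the model's input, to show where sentiment signals emerge.
inputs = tokenizer("The film was surprisingly moving and well acted.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)
for i, h in enumerate(outputs.hidden_states):
    print(f"layer {i}: hidden state shape {tuple(h.shape)}")
```

Running the sketch prints token-level SHAP values for the whole model followed by the shape of each layer's hidden state; the paper's contribution is to carry the attribution step into those intermediate layers.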
Similar Papers
Utilizing Large Language Models for Machine Learning Explainability
Machine Learning (CS)
AI builds smart computer programs that explain themselves.
ContextualSHAP : Enhancing SHAP Explanations Through Contextual Language Generation
Artificial Intelligence
Explains AI decisions in simple words for everyone.
An Interpretability-Guided Framework for Responsible Synthetic Data Generation in Emotional Text
Machine Learning (CS)
Creates fake social media posts to train AI.