Score: 1

Integration of Explainable AI Techniques with Large Language Models for Enhanced Interpretability for Sentiment Analysis

Published: March 15, 2025 | arXiv ID: 2503.11948v1

By: Thivya Thogesan, Anupiya Nugaliyadde, Kok Wai Wong

Potential Business Impact:

Shows how AI language models judge the feelings expressed in text, layer by layer.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Interpretability remains a key difficulty in sentiment analysis with Large Language Models (LLMs), particularly in high-stakes applications where it is crucial to understand the rationale behind predictions. This research addresses the problem by introducing a technique that applies SHAP (Shapley Additive Explanations) after breaking LLMs down into components such as the embedding layer, encoder, decoder, and attention layers, yielding a layer-by-layer account of sentiment prediction. Decomposing the model into these parts offers a clearer view of how it interprets and categorises sentiment. The method is evaluated on the Stanford Sentiment Treebank (SST-2) dataset, which shows how different sentences affect different layers. Experimental evaluations demonstrate that layer-wise SHAP analysis clarifies sentiment-specific token attributions and provides a notable improvement over current whole-model explainability techniques. These results highlight how the proposed approach could improve the reliability and transparency of LLM-based sentiment analysis in critical applications.
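The abstract describes decomposing an LLM into its embedding, attention, and encoder layers and applying SHAP at each level. As a rough illustration of the ingredients involved (not the authors' code), the sketch below runs SHAP's standard text explainer over an SST-2 DistilBERT classifier, the kind of whole-model baseline the paper compares against, and then exposes the per-layer hidden states that a layer-wise decomposition would attribute. The checkpoint name and library calls are assumptions, not taken from the paper.

```python
# Minimal sketch, assuming the public `shap` and `transformers` APIs and a
# standard SST-2 DistilBERT checkpoint; the paper's method goes further by
# attributing each layer (embeddings, attention, encoder blocks) separately.
import shap
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, output_hidden_states=True
)

# Whole-model baseline: SHAP's text explainer over a sentiment pipeline.
clf = pipeline("text-classification", model=model, tokenizer=tokenizer,
               return_all_scores=True)
explainer = shap.Explainer(clf)
shap_values = explainer(["The film is a gorgeous, witty, seductive surprise."])
print(shap_values)  # per-token contributions to each sentiment class

# Layer-by-layer view: the hidden states that a layer-wise SHAP decomposition
# would attribute (embedding output plus one tensor per transformer block).
inputs = tokenizer("The film is a gorgeous, witty, seductive surprise.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
for i, hidden in enumerate(outputs.hidden_states):
    print(f"layer {i}: hidden-state shape {tuple(hidden.shape)}")
```

The `hidden_states` tuple holds the embedding output followed by one tensor per transformer block, which is the granularity at which a layer-by-layer attribution of sentiment predictions would operate.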

Country of Origin
🇦🇺 Australia

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Computation and Language