Score: 2

Measuring What LLMs Think They Do: SHAP Faithfulness and Deployability on Financial Tabular Classification

Published: November 28, 2025 | arXiv ID: 2512.00163v1

By: Saeed AlMarri, Mathieu Ravaut, Kristof Juhasz, and more

Potential Business Impact:

Makes AI explain financial risks more honestly.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) have attracted significant attention for classification tasks, offering a flexible alternative to trusted classical machine learning models like LightGBM through zero-shot prompting. However, their reliability on structured tabular data remains unclear, particularly in high-stakes applications like financial risk assessment. Our study systematically evaluates LLMs on financial classification tasks and computes SHAP values for their predictions. Our analysis shows a divergence between LLMs' self-explanations of feature impact and their SHAP values, as well as notable differences between the SHAP values of LLMs and LightGBM. These findings highlight the limitations of LLMs as standalone classifiers for structured financial modeling, but also instill optimism that improved explainability mechanisms coupled with few-shot prompting will make LLMs usable in risk-sensitive domains.
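As a rough illustration of the kind of comparison the abstract describes, the sketch below (not the authors' code) trains a LightGBM classifier on synthetic tabular data, computes its SHAP values, and measures rank agreement against a hypothetical feature-importance vector standing in for an LLM's self-explanation. The feature names, the synthetic dataset, and the `llm_importance` values are all illustrative assumptions; in the paper these would come from real financial data and from prompting an LLM.

```python
# Minimal sketch, assuming the shap, lightgbm, scikit-learn, and scipy packages.
import numpy as np
import shap
from lightgbm import LGBMClassifier
from scipy.stats import spearmanr
from sklearn.datasets import make_classification

# Synthetic stand-in for a financial tabular dataset (feature names are placeholders).
X, y = make_classification(n_samples=500, n_features=6, n_informative=4, random_state=0)
feature_names = ["income", "debt_ratio", "credit_age",
                 "utilization", "inquiries", "delinquencies"]

model = LGBMClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles such as LightGBM.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Older SHAP versions return a per-class list for binary classifiers; take the positive class.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values
lgbm_importance = np.abs(sv).mean(axis=0)  # mean |SHAP| per feature

# Hypothetical importance scores parsed from an LLM's self-explanation
# (values here are purely illustrative, not results from the paper).
llm_importance = np.array([0.30, 0.25, 0.10, 0.20, 0.05, 0.10])

# Rank agreement between the two explanations; low correlation signals divergence.
rho, p = spearmanr(lgbm_importance, llm_importance)
print(f"Spearman rank correlation: {rho:.2f} (p={p:.3f})")
```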


Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)