Measuring What LLMs Think They Do: SHAP Faithfulness and Deployability on Financial Tabular Classification
By: Saeed AlMarri, Mathieu Ravaut, Kristof Juhasz, and more
Potential Business Impact:
Makes AI explain financial risks more honestly.
Large Language Models (LLMs) have attracted significant attention for classification tasks, offering a flexible alternative to trusted classical machine learning models like LightGBM through zero-shot prompting. However, their reliability for structured tabular data remains unclear, particularly in high-stakes applications like financial risk assessment. Our study systematically evaluates LLMs on financial classification tasks and computes their SHAP values. Our analysis shows a divergence between LLMs' self-explanations of feature impact and their SHAP values, as well as notable differences between the LLMs' and LightGBM's SHAP values. These findings highlight the limitations of LLMs as standalone classifiers for structured financial modeling, but also instill optimism that improved explainability mechanisms coupled with few-shot prompting will make LLMs usable in risk-sensitive domains.
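The sketch below illustrates the kind of comparison the abstract describes, not the paper's actual pipeline: SHAP values from a LightGBM baseline set against SHAP values for an LLM treated as a black-box classifier via the model-agnostic KernelExplainer. The synthetic data, the `llm_predict_proba` wrapper, and all parameter choices are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions only): compare LightGBM SHAP values with
# SHAP values for an LLM treated as a black-box classifier.
import numpy as np
import lightgbm as lgb
import shap
from scipy.stats import spearmanr
from sklearn.model_selection import train_test_split

# Hypothetical tabular financial data: rows are borrowers, y is a binary default label.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(500) > 0.9).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1) Classical baseline: LightGBM explained with TreeExplainer (fast tree SHAP).
gbm = lgb.LGBMClassifier(n_estimators=200).fit(X_train, y_train)
gbm_shap = shap.TreeExplainer(gbm).shap_values(X_test)
# Depending on the shap version, binary classifiers yield one array or a list of
# per-class arrays; take the positive class either way.
gbm_shap = gbm_shap[1] if isinstance(gbm_shap, list) else gbm_shap

# 2) LLM as a black-box classifier: wrap zero-shot prompting in a function that
#    returns P(default), then use the model-agnostic KernelExplainer.
def llm_predict_proba(X_batch):
    """Hypothetical stand-in for an LLM call: serialize each row into a prompt,
    query the model, and parse its answer into a default probability."""
    return np.clip(X_batch[:, 0] + 0.5 * X_batch[:, 1] - 0.4, 0.0, 1.0)

background = shap.sample(X_train, 50)  # background set required by KernelSHAP
llm_shap = shap.KernelExplainer(llm_predict_proba, background).shap_values(
    X_test[:20], nsamples=200          # small budget: each evaluation would hit an LLM
)

# 3) Faithfulness-style comparison: rank agreement of mean |SHAP| per feature.
gbm_importance = np.abs(gbm_shap[:20]).mean(axis=0)
llm_importance = np.abs(llm_shap).mean(axis=0)
rho, _ = spearmanr(gbm_importance, llm_importance)
print(f"Spearman correlation of feature-importance rankings: {rho:.2f}")
```

A low rank correlation here would mirror the paper's finding of notable differences between LLM and LightGBM SHAP values; comparing the LLM's own verbal feature rankings against `llm_importance` would probe the self-explanation divergence.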
Similar Papers
Interpreting LLMs as Credit Risk Classifiers: Do Their Feature Explanations Align with Classical ML?
Computation and Language
Helps banks predict loan defaults better.
Just Because You Can, Doesn't Mean You Should: LLMs for Data Fitting
Machine Learning (CS)
Computers change answers if you rename data.
Utilizing Large Language Models for Machine Learning Explainability
Machine Learning (CS)
AI builds smart computer programs that explain themselves.