Unlocking the Black Box: A Five-Dimensional Framework for Evaluating Explainable AI in Credit Risk
By: Rongbin Ye, Jiaqi Chen
Potential Business Impact:
Helps banks use complex machine learning models safely and explain their decisions to regulators.
The financial industry faces a significant challenge in credit risk modeling: balancing the predictive power of advanced machine learning models, including neural networks, against the explainability required by regulatory entities such as the Office of the Comptroller of the Currency (OCC) and the Consumer Financial Protection Bureau (CFPB). This paper aims to bridge the gap between these "black box" models and post-hoc explainability frameworks such as LIME and SHAP. The authors apply these frameworks to a range of models and demonstrate that more complex models with stronger predictive power can, when paired with SHAP and LIME, reach a comparable level of explainability. Beyond this performance comparison, the paper proposes a novel five-dimensional framework covering Inherent Interpretability, Global Explanations, Local Explanations, Consistency, and Complexity, offering a nuanced method for assessing and comparing model explainability beyond simple accuracy metrics. The research demonstrates the feasibility of employing sophisticated, high-performing ML models in regulated financial environments by using modern explainability techniques, and it provides a structured approach for evaluating the crucial trade-offs between model performance and interpretability.
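The paper itself does not include code, but the workflow it describes (fitting a higher-capacity "black box" model and then extracting global explanations with SHAP and local explanations with LIME) can be sketched roughly as follows. The gradient-boosted classifier, synthetic data, and feature names below are illustrative assumptions for demonstration, not the authors' actual experimental setup.

```python
# Illustrative sketch only: a gradient-boosted credit-risk classifier explained
# globally with SHAP and locally with LIME. The data, model, and feature names
# are assumptions, not the paper's setup.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit dataset (default vs. non-default).
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = ["credit_util", "dti", "num_delinq", "loan_amount", "income", "tenure"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A higher-capacity "black box" model.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Global explanation: mean absolute SHAP value per feature over the test set.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)
mean_abs_shap = np.abs(shap_values).mean(axis=0)
print(dict(zip(feature_names, np.round(mean_abs_shap, 3))))

# Local explanation: LIME for a single applicant's prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, class_names=["good", "default"],
    mode="classification",
)
local_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=4)
print(local_exp.as_list())
```

In this sketch the SHAP summary plays the role of a global explanation (which features drive the model overall) and the LIME output the role of a local explanation (why one applicant received a particular score), two of the five dimensions in the proposed evaluation framework.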