How Data Quality Affects Machine Learning Models for Credit Risk Assessment
By: Andrea Maurino
Potential Business Impact:
Makes loan decisions more accurate even with bad data.
Machine Learning (ML) models are increasingly employed for credit risk evaluation, and their effectiveness hinges largely on the quality of the input data. In this paper we investigate the impact of several data quality issues, including missing values, noisy attributes, outliers, and label errors, on the predictive accuracy of machine learning models used in credit risk assessment. Using an open-source dataset, we introduce controlled data corruption with the Pucktrick library to assess the robustness of ten frequently used models, including Random Forest, SVM, and Logistic Regression. Our experiments show significant differences in model robustness depending on the nature and severity of the data degradation. Moreover, the proposed methodology and accompanying tools offer practical support for practitioners seeking to enhance data pipeline robustness, and provide researchers with a flexible framework for further experimentation in data-centric AI contexts.
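The experimental protocol described above, injecting a controlled amount of one corruption type into the training data and measuring the accuracy drop on clean held-out data, can be sketched as follows. This is a minimal illustration using scikit-learn and a synthetic dataset, not the paper's actual Pucktrick-based pipeline; the `corrupt` helper, the 20% corruption rate, and the mean-imputation step for missing values are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def corrupt(X, y, kind, rate=0.2):
    """Return copies of X, y degraded by one data quality issue.

    `kind` and `rate` are illustrative parameters, not Pucktrick's API.
    """
    Xc, yc = X.copy(), y.copy()
    n, d = Xc.shape
    mask = rng.random((n, d)) < rate  # cells to corrupt
    if kind == "missing":
        Xc[mask] = np.nan
        # Simple mean imputation so downstream models can train.
        col_means = np.nanmean(Xc, axis=0)
        Xc = np.where(np.isnan(Xc), col_means, Xc)
    elif kind == "noise":
        # Add Gaussian noise scaled to each feature's spread.
        Xc += mask * rng.normal(0.0, Xc.std(axis=0), (n, d))
    elif kind == "outliers":
        # Shift corrupted cells far outside the normal range.
        Xc += mask * 10.0 * Xc.std(axis=0)
    elif kind == "label_errors":
        flip = rng.random(n) < rate  # rows whose binary label is flipped
        yc[flip] = 1 - yc[flip]
    return Xc, yc

# Synthetic stand-in for a credit risk dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for kind in ["clean", "missing", "noise", "outliers", "label_errors"]:
    Xc, yc = (X_tr, y_tr) if kind == "clean" else corrupt(X_tr, y_tr, kind)
    model = RandomForestClassifier(random_state=0).fit(Xc, yc)
    results[kind] = accuracy_score(y_te, model.predict(X_te))

print(results)
```

The same loop extends naturally to the other classifiers and severity levels the paper evaluates: swap in a different estimator per iteration and vary `rate` to trace how accuracy degrades with corruption severity.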
Similar Papers
Data Quality Issues in Flare Prediction using Machine Learning Models
Solar and Stellar Astrophysics
Fixes bad space data to predict solar flares better.
A comparative analysis of machine learning algorithms for predicting probabilities of default
Risk Management
Helps banks guess if people will repay loans.
R+R: Security Vulnerability Dataset Quality Is Critical
Software Engineering
Finds security flaws in software more reliably.