SHLIME: Foiling adversarial attacks fooling SHAP and LIME
By: Sam Chauhan, Estelle Duguet, Karthik Ramakrishnan, and more
Potential Business Impact:
Finds hidden unfairness in AI decisions.
Post hoc explanation methods, such as LIME and SHAP, provide interpretable insights into black-box classifiers and are increasingly used to assess model biases and generalizability. However, these methods are vulnerable to adversarial manipulation, potentially concealing harmful biases. Building on the work of Slack et al. (2020), we investigate the susceptibility of LIME and SHAP to biased models and evaluate strategies for improving robustness. We first replicate the original COMPAS experiment to validate prior findings and establish a baseline. We then introduce a modular testing framework enabling systematic evaluation of augmented and ensemble explanation approaches across classifiers of varying performance. Using this framework, we assess multiple LIME/SHAP ensemble configurations on out-of-distribution models, comparing their resistance to bias concealment against the original methods. Our results identify configurations that substantially improve bias detection, highlighting their potential for enhancing transparency in the deployment of high-stakes machine learning systems.
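To make the ensemble idea concrete, below is a minimal, hypothetical sketch (not the paper's code) of how LIME and SHAP attributions for a single instance could be combined into one feature ordering, so that a bias hidden from one explainer may still surface in the aggregate. It assumes the `lime`, `shap`, and `scikit-learn` Python packages; names such as `ensemble_order` and the Borda-style aggregation are illustrative choices, not details taken from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer
import shap

# Toy stand-in for a (possibly adversarially scaffolded) black-box classifier.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]
model = RandomForestClassifier(random_state=0).fit(X, y)
x0 = X[0]  # instance whose prediction we want to explain

# LIME: local surrogate weights for the instance, ranked by magnitude.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names)
lime_map = lime_explainer.explain_instance(
    x0, model.predict_proba, num_features=len(feature_names)
).as_map()[1]
lime_weights = np.zeros(len(feature_names))
for idx, weight in lime_map:
    lime_weights[idx] = abs(weight)
lime_rank = np.argsort(-lime_weights)

# SHAP: KernelExplainer on a scalar-output wrapper (probability of class 1).
predict_pos = lambda data: model.predict_proba(data)[:, 1]
shap_explainer = shap.KernelExplainer(predict_pos, shap.sample(X, 50))
shap_vals = np.asarray(
    shap_explainer.shap_values(x0.reshape(1, -1), nsamples=200)
).reshape(-1)
shap_rank = np.argsort(-np.abs(shap_vals))

# Borda-style rank aggregation (illustrative): a feature whose influence is
# concealed from one explainer may still rank highly in the combined ordering.
borda = np.zeros(len(feature_names))
for rank in (lime_rank, shap_rank):
    for position, idx in enumerate(rank):
        borda[idx] += len(feature_names) - position
ensemble_order = np.argsort(-borda)
print("Ensemble feature ordering:", [feature_names[i] for i in ensemble_order])
```

In practice, an evaluation like the one described above would compare how often such a combined ordering still places a protected attribute near the top when the classifier has been adversarially scaffolded, versus how often LIME or SHAP alone does.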
Similar Papers
A Method for Evaluating the Interpretability of Machine Learning Models in Predicting Bond Default Risk Based on LIME and SHAP
General Finance
Helps understand how smart computer programs make choices.
ABLE: Using Adversarial Pairs to Construct Local Models for Explaining Model Predictions
Machine Learning (CS)
Explains how "black box" computer decisions work.
Misinformation Detection using Large Language Models with Explainability
Computation and Language
Finds fake news online and shows why.