SHLIME: Foiling adversarial attacks fooling SHAP and LIME

Published: August 14, 2025 | arXiv ID: 2508.11053v1

By: Sam Chauhan, Estelle Duguet, Karthik Ramakrishnan, and more

Potential Business Impact:

Detects hidden bias in AI decisions, even when a model is built to conceal it from explanation tools.

Post hoc explanation methods, such as LIME and SHAP, provide interpretable insights into black-box classifiers and are increasingly used to assess model biases and generalizability. However, these methods are vulnerable to adversarial manipulation, potentially concealing harmful biases. Building on the work of Slack et al. (2020), we investigate the susceptibility of LIME and SHAP to biased models and evaluate strategies for improving robustness. We first replicate the original COMPAS experiment to validate prior findings and establish a baseline. We then introduce a modular testing framework enabling systematic evaluation of augmented and ensemble explanation approaches across classifiers of varying performance. Using this framework, we assess multiple LIME/SHAP ensemble configurations on out-of-distribution models, comparing their resistance to bias concealment against the original methods. Our results identify configurations that substantially improve bias detection, highlighting their potential for enhancing transparency in the deployment of high-stakes machine learning systems.
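The abstract describes evaluating LIME/SHAP ensemble configurations for resistance to bias concealment, but the paper's actual framework and configurations are not reproduced here. Below is a minimal sketch of the general idea, assuming a toy tabular classifier: the synthetic data, the stand-in sensitive column, and the rank-averaging rule are illustrative choices for this sketch, not the authors' code.

```python
# Sketch: combine LIME and SHAP attributions and check whether a sensitive
# feature surfaces as important. Illustrative only; not the paper's framework.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["prior_counts", "age", "sensitive_flag"]   # toy COMPAS-style stand-in
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 2] > 0).astype(int)                   # deliberately "biased" label rule
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def lime_attributions(x):
    explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                     mode="classification")
    exp = explainer.explain_instance(x, clf.predict_proba,
                                     num_features=len(feature_names))
    weights = dict(exp.as_map()[1])                          # {feature index: weight} for class 1
    return np.array([abs(weights.get(i, 0.0)) for i in range(len(feature_names))])

def shap_attributions(x):
    background = shap.sample(X_train, 50)
    # Explain the positive-class probability so the output is one-dimensional.
    explainer = shap.KernelExplainer(lambda X: clf.predict_proba(X)[:, 1], background)
    sv = explainer.shap_values(x.reshape(1, -1))
    return np.abs(np.asarray(sv)).ravel()

def ensemble_rank(x):
    # Rank-average the two explainers so that hiding a feature from one of them
    # is not enough to hide it from the ensemble.
    ranks = []
    for attr in (lime_attributions(x), shap_attributions(x)):
        order = np.argsort(-attr)                            # most important first
        rank = np.empty_like(order)
        rank[order] = np.arange(len(order))
        ranks.append(rank)
    return np.mean(ranks, axis=0)

avg_rank = ensemble_rank(X_train[0])
sensitive = feature_names.index("sensitive_flag")
print(f"average rank of sensitive feature: {avg_rank[sensitive]:.1f} (0 = most important)")
```

Rank-averaging is only one plausible aggregation; the paper evaluates multiple LIME/SHAP ensemble configurations against adversarial, bias-concealing models rather than the straightforwardly biased classifier used above.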

Country of Origin
🇺🇸 United States

Page Count
7 pages

Category
Computer Science:
Machine Learning (CS)