Score: 2

DeepLeak: Privacy Enhancing Hardening of Model Explanations Against Membership Leakage

Published: January 6, 2026 | arXiv ID: 2601.03429v1

By: Firas Ben Hmida, Zain Sbeih, Philemon Hailemariam, and more

Potential Business Impact:

Protects private data when AI explains its choices.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Machine learning (ML) explainability is central to algorithmic transparency in high-stakes settings such as predictive diagnostics and loan approval. However, these same domains require rigorous privacy guarantees, creating tension between interpretability and privacy. Although prior work has shown that explanation methods can leak membership information, practitioners still lack systematic guidance on selecting or deploying explanation techniques that balance transparency with privacy. We present DeepLeak, a system to audit and mitigate privacy risks in post-hoc explanation methods. DeepLeak advances the state of the art in three ways: (1) comprehensive leakage profiling: we develop a stronger explanation-aware membership inference attack (MIA) to quantify how much representative explanation methods leak membership information under default configurations; (2) lightweight hardening strategies: we introduce practical, model-agnostic mitigations, including sensitivity-calibrated noise, attribution clipping, and masking, that substantially reduce membership leakage while preserving explanation utility; and (3) root-cause analysis: through controlled experiments, we pinpoint algorithmic properties (e.g., attribution sparsity and sensitivity) that drive leakage. Evaluating 15 explanation techniques across four families on image benchmarks, DeepLeak shows that default settings can leak up to 74.9% more membership information than previously reported. Our mitigations cut leakage by up to 95% (minimum 46.5%) with at most 3.3% utility loss on average. DeepLeak offers a systematic, reproducible path to safer explainability in privacy-sensitive ML.
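The three hardening strategies named in the abstract (sensitivity-calibrated noise, attribution clipping, and masking) can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, parameters, and the quantile-based heuristics below are assumptions standing in for the authors' sensitivity calibration.

```python
import numpy as np

def harden_attribution(attr, clip_q=0.95, noise_scale=0.1, mask_q=0.5, rng=None):
    """Hypothetical sketch of the paper's mitigations applied to an
    attribution map: clipping, calibrated noise, and masking.
    All parameter names and default values are illustrative assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    a = np.asarray(attr, dtype=float).copy()
    # 1) Attribution clipping: bound extreme magnitudes at a per-example
    #    quantile so outlier attributions reveal less about membership.
    hi = np.quantile(np.abs(a), clip_q)
    a = np.clip(a, -hi, hi)
    # 2) Noise calibrated to the attribution's own scale (a stand-in for
    #    the paper's sensitivity calibration).
    a = a + rng.normal(0.0, noise_scale * (np.abs(a).std() + 1e-12), size=a.shape)
    # 3) Masking: zero out low-magnitude attributions, keeping only the
    #    salient features that carry the explanation's utility.
    lo = np.quantile(np.abs(a), mask_q)
    a[np.abs(a) < lo] = 0.0
    return a
```

In a real deployment, the noise scale and mask fraction would be tuned against both an explanation-aware MIA (privacy) and an attribution-fidelity metric (utility), as the paper's evaluation does.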

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
Cryptography and Security