Score: 1

Beyond Verification: Abductive Explanations for Post-AI Assessment of Privacy Leakage

Published: November 13, 2025 | arXiv ID: 2511.10284v1

By: Belona Sonna, Alban Grastien, Claire Benn

Potential Business Impact:

Checks whether an AI system mistakenly reveals your secrets.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Privacy leakage in AI-based decision processes poses significant risks, particularly when sensitive information can be inferred. We propose a formal framework for auditing privacy leakage using abductive explanations, which identify the minimal sufficient evidence justifying a model's decisions and determine whether sensitive information is disclosed. Our framework formalizes both individual and system-level leakage, introducing the notion of Potentially Applicable Explanations (PAE) to identify individuals whose outcomes can shield those with sensitive features. This approach provides rigorous privacy guarantees while producing human-understandable explanations, a key requirement for auditing tools. Experimental evaluation on the German Credit Dataset illustrates how the importance of a sensitive literal in the model's decision process affects privacy leakage. Despite computational challenges and simplifying assumptions, our results demonstrate that abductive reasoning enables interpretable privacy auditing, offering a practical pathway to reconciling transparency, model interpretability, and privacy preservation in AI decision-making.
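The core construct in the abstract, an abductive explanation, is a subset-minimal set of feature literals that by itself entails the model's decision. The sketch below illustrates that idea on a toy boolean classifier, using greedy deletion with a brute-force sufficiency check; the decision rule, feature names, and the leakage remark in the final comment are illustrative assumptions, not the paper's actual models, dataset, or algorithm.

```python
from itertools import product

# Toy binary classifier over three binary features; a stand-in for the
# audited model. (Hypothetical rule -- the paper evaluates on the German
# Credit Dataset, not this function.)
def model(x):
    # x maps feature name -> 0/1
    return int(x["income_high"] or (x["employed"] and not x["foreign"]))

FEATURES = ["income_high", "employed", "foreign"]

def is_sufficient(fixed, instance):
    """A set of literals `fixed` is sufficient if every completion of the
    free features yields the same decision as the full instance."""
    target = model(instance)
    free = [f for f in FEATURES if f not in fixed]
    for values in product([0, 1], repeat=len(free)):
        candidate = dict(zip(free, values))
        candidate.update({f: instance[f] for f in fixed})
        if model(candidate) != target:
            return False
    return True

def abductive_explanation(instance):
    """Greedy deletion: shrink the full set of literals to a subset that is
    still sufficient. The result is subset-minimal, though not necessarily
    a smallest (cardinality-minimal) explanation."""
    fixed = set(FEATURES)
    for f in FEATURES:
        if is_sufficient(fixed - {f}, instance):
            fixed.discard(f)
    return {f: instance[f] for f in sorted(fixed)}

if __name__ == "__main__":
    person = {"income_high": 0, "employed": 1, "foreign": 0}
    print("decision:", model(person))
    print("explanation:", abductive_explanation(person))
    # Prints {'employed': 1, 'foreign': 0}: the sensitive literal `foreign`
    # appears in the explanation, so in this toy setting the decision
    # rationale exposes that attribute -- the kind of leakage the paper's
    # auditing framework is designed to detect.
```

Each sufficiency check here enumerates all completions of the free features, which is exponential in the number of features; this brute-force check is only for illustration and echoes the computational challenges the abstract acknowledges.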

Country of Origin
🇦🇺 🇬🇧 Australia, United Kingdom

Page Count
10 pages

Category
Computer Science:
Artificial Intelligence