Deep opacity and AI: A threat to XAI and to privacy protection mechanisms
By: Vincent C. Müller
Potential Business Impact:
Explains why opaque AI systems undermine informed consent and anonymity, and where explainability can help restore privacy protection.
It is known that big data analytics and AI pose a threat to privacy, and that part of this threat stems from a "black box problem" in AI. I explain how this becomes a problem in the context of justification for judgments and actions. Furthermore, I suggest distinguishing three kinds of opacity: 1) the subjects do not know what the system does ("shallow opacity"), 2) the analysts do not know what the system does ("standard black box opacity"), or 3) the analysts cannot possibly know what the system might do ("deep opacity"). If the agents, data subjects as well as analytics experts, operate under opacity, then these agents cannot provide the justifications for judgments that are necessary to protect privacy, e.g., they cannot give "informed consent" or guarantee "anonymity". It follows from these points that agents in big data analytics and AI often cannot make the judgments needed to protect privacy. I therefore conclude that big data analytics makes the privacy problems worse and the remedies less effective. As a positive note, I provide a brief outlook on technical ways to handle this situation.
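To make the three-level distinction concrete, the following is a minimal, purely illustrative Python sketch (not from the paper): a toy scoring model whose decision is hidden from the data subject (shallow opacity), whose internal logic is not humanly readable even for the analyst (standard black box opacity), and which is later retrained on drifted data so that its future behaviour on the same record could not have been anticipated at consent time (deep opacity). All names, the data, and the model choice are assumptions made for illustration only.

```python
# Illustrative sketch (assumption, not from the paper): three levels of
# opacity around a toy scoring model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy training data: three hypothetical features, binary label.
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2] > 0).astype(int)

model_v1 = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

applicant = np.array([[0.2, -1.0, 0.4]])  # one data subject's record

# 1) Shallow opacity: the subject only ever sees the decision, not which
#    features were used or how, so they cannot judge what they consented to.
print("decision shown to subject:", model_v1.predict(applicant)[0])

# 2) Standard black box opacity: the analyst can inspect aggregate artefacts
#    such as feature importances, but the 50-tree ensemble's actual decision
#    logic for this particular record is not humanly readable.
print("feature importances (analyst's view):", model_v1.feature_importances_)

# 3) Deep opacity: the system is later retrained on drifted data; even the
#    analyst could not have known, at consent time, what the updated model
#    would do with the very same record.
X_drift = rng.normal(loc=0.8, size=(500, 3))
y_drift = (X_drift[:, 1] - X_drift[:, 0] > 0).astype(int)
model_v2 = RandomForestClassifier(n_estimators=50, random_state=0).fit(
    np.vstack([X_train, X_drift]), np.hstack([y_train, y_drift])
)
print("same record, updated model:", model_v2.predict(applicant)[0])
```

This kind of update-driven unpredictability is also the topic of the "moving target" paper listed under Similar Papers below.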
Similar Papers
Explainability of Algorithms
Machine Learning (CS)
Helps understand how AI makes decisions.
A moving target in AI-assisted decision-making: Dataset shift, model updating, and the problem of update opacity
Computers and Society
Examines how model updates after deployment can make AI decisions harder to understand.
The Impact of Transparency in AI Systems on Users' Data-Sharing Intentions: A Scenario-Based Experiment
Machine Learning (CS)
Finds that trust, rather than understanding how the system works, drives people's willingness to share data.