On the Trade-Off Between Transparency and Security in Adversarial Machine Learning
By: Lucas Fenaux, Christopher Srinivasa, Florian Kerschbaum
Potential Business Impact:
Makes AI safer by hiding its secrets.
Transparency and security are both central to Responsible AI, but they can conflict in adversarial settings. We investigate the strategic effect of transparency on agents through the lens of transferable adversarial example attacks, in which attackers maliciously perturb their inputs using surrogate models to fool a defender's target model. Both surrogate and target models can be defended or undefended, and each player must decide which kind to use. In a large-scale empirical evaluation of nine attacks across 181 models, we find that attackers are more successful when their choice matches the defender's; hence, obscurity can benefit the defender. Using game theory, we analyze this trade-off between transparency and security by modeling the problem as both a Nash game and a Stackelberg game and comparing the expected outcomes. Our analysis confirms that merely knowing whether a defender's model is defended can sometimes be enough to damage its security. This result points to a broader trade-off, suggesting that transparency in AI systems can be at odds with security. Beyond adversarial machine learning, our work illustrates how game-theoretic reasoning can uncover such conflicts.
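To make the attack setting concrete, here is a minimal sketch of a transferable adversarial example attack. It assumes PyTorch, uses a one-step FGSM perturbation as a stand-in for the paper's nine attacks, and substitutes small untrained placeholder networks for the surrogate and target models; the 181 models from the evaluation are not reproduced here.

```python
# Minimal sketch of a transferable adversarial example attack:
# craft a perturbation with FGSM on a surrogate model, then test
# whether it also fools a separate target model. Both models below
# are untrained stand-ins used only for illustration.
import torch
import torch.nn as nn

def fgsm_on_surrogate(surrogate, x, y, eps):
    """One-step FGSM perturbation computed only from the surrogate."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(surrogate(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def transfer_success_rate(target, x_adv, y):
    """Fraction of adversarial inputs misclassified by the target model."""
    with torch.no_grad():
        return (target(x_adv).argmax(dim=1) != y).float().mean().item()

if __name__ == "__main__":
    surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    target = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(8, 3, 32, 32)      # placeholder images in [0, 1]
    y = torch.randint(0, 10, (8,))    # placeholder labels
    x_adv = fgsm_on_surrogate(surrogate, x, y, eps=8 / 255)
    print("transfer success rate:", transfer_success_rate(target, x_adv, y))
```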
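The game-theoretic comparison can also be sketched with made-up numbers. The example below assumes a zero-sum 2x2 payoff matrix that only encodes the empirical finding that attacks transfer best when the attacker's choice matches the defender's; the paper's actual payoffs and equilibrium analysis may differ. It compares the defender's expected payoff when the attacker cannot observe which model is deployed (a mixed, Nash-style play) with the case where the deployment choice is transparent and the attacker best-responds to it (a Stackelberg-style reading).

```python
# Illustrative 2x2 game between a defender (rows: deploy an undefended
# or a defended model) and an attacker (columns: build the surrogate
# undefended or defended). Payoffs are the defender's and are made up;
# they only reflect that attacks are strongest when choices match (i == j).
import numpy as np

D = np.array([[0.2, 0.8],    # defender undefended vs attacker {undefended, defended}
              [0.7, 0.3]])   # defender defended   vs attacker {undefended, defended}

def value_hidden(D, grid=10001):
    """Defender's maximin value when the attacker cannot observe the
    realized choice: optimize over mixed strategies (the zero-sum Nash value)."""
    best = -np.inf
    for p in np.linspace(0.0, 1.0, grid):
        mix = np.array([p, 1.0 - p])
        best = max(best, min(mix @ D))   # attacker best-responds to the mix
    return best

def value_transparent(D):
    """Defender's value when the attacker sees which model is deployed
    and best-responds to that realized action."""
    return max(D.min(axis=1))

print("expected defender payoff, choice hidden:     ", round(value_hidden(D), 3))
print("expected defender payoff, choice transparent:", round(value_transparent(D), 3))
```

With these illustrative numbers, keeping the choice hidden yields an expected defender payoff of 0.5, while revealing it drops the payoff to 0.3, mirroring the finding that knowing whether a model is defended can be enough to damage its security.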
Similar Papers
The Pitfalls of "Security by Obscurity" And What They Mean for Transparent AI
Cryptography and Security
Makes AI systems safer by learning from computer security.
Overcoming Algorithm Aversion with Transparency: Can Transparent Predictions Change User Behavior?
Machine Learning (CS)
Lets people fix computer guesses, making them happier.
Self-Transparency Failures in Expert-Persona LLMs: A Large-Scale Behavioral Audit
Artificial Intelligence
AI models hide when they are experts.