"We are not Future-ready": Understanding AI Privacy Risks and Existing Mitigation Strategies from the Perspective of AI Developers in Europe
By: Alexandra Klymenko, Stephen Meisenbacher, Patrick Gage Kelley, and more
Potential Business Impact:
AI developers don't agree on which privacy risks matter most.
The proliferation of AI has sparked privacy concerns related to training data, model interfaces, downstream applications, and more. We interviewed 25 AI developers based in Europe to understand which privacy threats they believe pose the greatest risk to users, developers, and businesses, and what protective strategies, if any, would help to mitigate them. We find that there is little consensus among AI developers on the relative ranking of privacy risks. These differences stem from salient reasoning patterns that often relate to human rather than purely technical factors. Furthermore, while AI developers are aware of proposed mitigation strategies for addressing these risks, they report minimal real-world adoption. Our findings highlight both gaps and opportunities for empowering AI developers to better address privacy risks in AI.
Similar Papers
How Are We Doing With Using AI-Based Programming Assistants For Privacy-Related Code Generation? The Developers' Experience
Software Engineering
AI helps make apps more private and secure.
AI For Privacy in Smart Homes: Exploring How Leveraging AI-Powered Smart Devices Enhances Privacy Protection
Human-Computer Interaction
AI helps smart homes protect your private information.
Understanding Users' Security and Privacy Concerns and Attitudes Towards Conversational AI Platforms
Cryptography and Security
Users want AI to be safer and more private.