Breaking SafetyCore: Exploring the Risks of On-Device AI Deployment
By: Victor Guyomard, Mathis Mauvisseau, Marie Paindavoine
Potential Business Impact:
Hackers can steal and break a phone's private AI.
Due to hardware and software improvements, an increasing number of AI models are deployed on-device. This shift enhances privacy and reduces latency, but also introduces security risks distinct from traditional software. In this article, we examine these risks through the real-world case study of SafetyCore, an Android system service incorporating sensitive image content detection. We demonstrate how the on-device AI model can be extracted and manipulated to bypass detection, effectively rendering the protection ineffective. Our analysis exposes vulnerabilities of on-device AI models and provides a practical demonstration of how adversaries can exploit them.
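The bypass described above hinges on crafting inputs that flip an extracted classifier's decision. As a purely illustrative sketch (the toy linear "detector", its weights, and the `epsilon` step size below are assumptions for demonstration, not SafetyCore internals), a gradient-sign-style perturbation that lowers a model's "sensitive content" score might look like:

```python
import numpy as np

def toy_detector(x, w, b):
    # Toy stand-in for an extracted on-device model: returns a
    # "sensitive content" probability via a logistic function.
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def fgsm_bypass(x, w, b, epsilon=0.5):
    # Fast-gradient-sign-style perturbation: step the input against
    # the gradient of the score so the detector's output drops.
    score = toy_detector(x, w, b)
    grad = score * (1.0 - score) * w  # d(score)/dx for the toy model
    return x - epsilon * np.sign(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # assumed weights of the extracted model
b = 0.0
x = w.copy()             # an input the detector flags with high confidence

before = toy_detector(x, w, b)
after = toy_detector(fgsm_bypass(x, w, b), w, b)
# The perturbed input scores strictly lower, i.e. it evades the detector.
```

Once the weights are in hand (as the extraction step in the paper demonstrates), this white-box gradient access is exactly what makes on-device deployment riskier than a server-side model, where an attacker would only see query outputs.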
Similar Papers
Systems-Theoretic and Data-Driven Security Analysis in ML-enabled Medical Devices
Cryptography and Security
Makes smart medical tools safer from hackers.
Hardware optimization on Android for inference of AI models
Machine Learning (CS)
Makes phone AI apps run much faster.
Empowering Edge Intelligence: A Comprehensive Survey on On-Device AI Models
Artificial Intelligence
Puts smart computer brains on your phone.