
An Early Experience with Confidential Computing Architecture for On-Device Model Protection

Published: April 11, 2025 | arXiv ID: 2504.08508v1

By: Sina Abdollahi, Mohammad Maheri, Sandra Siby, and more

Potential Business Impact:
Keeps on-device AI private and fast.

Business Areas:
Cloud Security, Information Technology, Privacy and Security

Deploying machine learning (ML) models on user devices can improve privacy (by keeping data local) and reduce inference latency. Trusted Execution Environments (TEEs) are a practical solution for protecting proprietary models, yet existing TEE solutions have architectural constraints that hinder on-device model deployment. Arm Confidential Computing Architecture (CCA), a new Arm extension, addresses several of these limitations and shows promise as a secure platform for on-device ML. In this paper, we evaluate the performance-privacy trade-offs of deploying models within CCA, highlighting its potential to enable confidential and efficient ML applications. Our evaluations show that CCA incurs at most 22% overhead when running models of different sizes and applications, including image classification, voice recognition, and chat assistants. This performance overhead comes with privacy benefits; for example, our framework successfully protects the model against membership inference attacks, reducing the adversary's success rate by 8.3%. To support further research and early adoption, we make our code and methodology publicly available.
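To make the 8.3% figure concrete, here is a minimal sketch of how a membership inference adversary's success rate is typically measured: a loss-threshold attack predicts "member" when the model's loss on a sample is low, and its accuracy over a balanced member/non-member set is the success rate. The losses and threshold below are synthetic illustrations, not data from the paper.

```python
import random

def mia_success_rate(member_losses, nonmember_losses, threshold):
    """Loss-threshold membership inference attack.

    Predict 'member' when the model's loss on a sample is below the
    threshold; the success rate is the attack's accuracy over a
    balanced set of members and non-members.
    """
    correct = sum(loss < threshold for loss in member_losses)
    correct += sum(loss >= threshold for loss in nonmember_losses)
    return correct / (len(member_losses) + len(nonmember_losses))

random.seed(0)
# Hypothetical per-sample losses: training-set members tend to have
# lower loss than unseen samples (the overfitting signal MIAs exploit).
members = [random.gauss(0.4, 0.2) for _ in range(1000)]
nonmembers = [random.gauss(0.8, 0.2) for _ in range(1000)]

rate = mia_success_rate(members, nonmembers, threshold=0.6)
```

A defense such as confining the model inside a CCA realm would be evaluated by running the same attack against the protected deployment and comparing the two success rates; an 8.3% drop means the adversary's accuracy moved that much closer to random guessing (0.5).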

Repos / Data Links

Page Count
10 pages

Category
Computer Science:
Cryptography and Security