LLA: Enhancing Security and Privacy for Generative Models with Logic-Locked Accelerators

Published: December 26, 2025 | arXiv ID: 2512.22307v1

By: You Li, Guannan Zhao, Yuhao Ju, and more

Potential Business Impact:

Protects generative AI models, the owner's intellectual property, from theft, corruption, and unauthorized use.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

We introduce LLA, an effective intellectual property (IP) protection scheme for generative AI models. LLA leverages the synergy between hardware and software to defend against various supply chain threats, including model theft, model corruption, and information leakage. On the software side, it embeds key bits into neurons that can trigger outliers to degrade performance and applies invariance transformations to obscure the key values. On the hardware side, it integrates a lightweight locking module into the AI accelerator while maintaining compatibility with various dataflow patterns and toolchains. An accelerator with a pre-stored secret key acts as a license to access the model services provided by the IP owner. The evaluation results show that LLA can withstand a broad range of oracle-guided key optimization attacks, while incurring a minimal computational overhead of less than 0.1% for 7,168 key bits.
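The core software-side idea, embedding key bits into neurons so that a wrong key triggers outlier activations that degrade performance, can be illustrated with a toy sketch. The locking rule below (scaling selected neuron rows by a large factor) is a hypothetical simplification for illustration, not the paper's actual transform, and all function names are invented here:

```python
import numpy as np

rng = np.random.default_rng(0)


def lock_weights(W, key_bits, scale=100.0):
    """Lock a weight matrix: scale each neuron (row) whose key bit is 1.

    Hypothetical locking rule for illustration; only the correct key
    cancels the scaling, while a wrong key leaves outlier-magnitude
    neurons that corrupt the layer's output.
    """
    W_locked = W.copy()
    for i, bit in enumerate(key_bits):
        if bit:
            W_locked[i] *= scale
    return W_locked


def unlock_weights(W_locked, key_bits, scale=100.0):
    """Invert the locking transform with a candidate key."""
    W = W_locked.copy()
    for i, bit in enumerate(key_bits):
        if bit:
            W[i] /= scale
    return W


# Toy layer: 4 neurons, 8 inputs.
W = rng.standard_normal((4, 8))
key = np.array([1, 0, 1, 1])
W_locked = lock_weights(W, key)

# The correct key (pre-stored in the accelerator) restores the weights.
assert np.allclose(unlock_weights(W_locked, key), W)

# A wrong key leaves an outlier-scaled neuron that degrades the output.
wrong_key = np.array([0, 0, 1, 1])
W_wrong = unlock_weights(W_locked, wrong_key)
x = rng.standard_normal(8)
print(np.abs(W_wrong @ x).max(), np.abs(W @ x).max())
```

In the paper's full scheme the key values are additionally obscured by invariance transformations, and the unlocking happens inside a lightweight hardware module on the accelerator rather than in software as sketched here.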

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
9 pages

Category
Computer Science:
Cryptography and Security