LLA: Enhancing Security and Privacy for Generative Models with Logic-Locked Accelerators
By: You Li, Guannan Zhao, Yuhao Ju, and more
Potential Business Impact:
Protects generative AI models from theft, tampering, and information leakage.
We introduce LLA, an effective intellectual property (IP) protection scheme for generative AI models. LLA leverages the synergy between hardware and software to defend against various supply chain threats, including model theft, model corruption, and information leakage. On the software side, it embeds key bits into neurons that can trigger outliers to degrade performance and applies invariance transformations to obscure the key values. On the hardware side, it integrates a lightweight locking module into the AI accelerator while maintaining compatibility with various dataflow patterns and toolchains. An accelerator with a pre-stored secret key acts as a license to access the model services provided by the IP owner. The evaluation results show that LLA can withstand a broad range of oracle-guided key optimization attacks, while incurring a minimal computational overhead of less than 0.1% for 7,168 key bits.
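To make the key-controlled locking idea more concrete, below is a minimal software-only sketch of the general concept described in the abstract: a few neuron weight rows are perturbed so that inference degrades unless the matching key bits are applied, which the paper envisions happening inside the accelerator's locking module. This is not the authors' implementation; the function names (lock_weights, apply_key), the scaling-based perturbation, and all parameters are illustrative assumptions, and the invariance transformations and hardware integration are not modeled.

```python
# Hypothetical sketch of key-controlled weight locking (not the paper's method).
import numpy as np

rng = np.random.default_rng(0)

def lock_weights(W, key_bits, locked_rows, outlier_scale=64.0):
    """Embed key bits into selected neurons (rows of W).

    A locked row is scaled so that, without the compensating key bit,
    the neuron produces outlier activations that degrade model quality.
    """
    W_locked = W.copy()
    for bit, row in zip(key_bits, locked_rows):
        W_locked[row] *= outlier_scale if bit else 1.0 / outlier_scale
    return W_locked

def apply_key(W_locked, key_bits, locked_rows, outlier_scale=64.0):
    """Unlocking step (done on the accelerator in the paper, sketched in
    software here): invert the per-neuron scaling using the stored key."""
    W = W_locked.copy()
    for bit, row in zip(key_bits, locked_rows):
        W[row] *= 1.0 / outlier_scale if bit else outlier_scale
    return W

# Toy layer: 16 neurons with 8 inputs; lock 4 neurons with 4 key bits.
W = rng.standard_normal((16, 8))
key_bits = rng.integers(0, 2, size=4)
locked_rows = rng.choice(16, size=4, replace=False)

W_locked = lock_weights(W, key_bits, locked_rows)
W_unlocked = apply_key(W_locked, key_bits, locked_rows)
W_wrong = apply_key(W_locked, 1 - key_bits, locked_rows)

x = rng.standard_normal(8)
print("correct key, max |error|:", np.abs(W_unlocked @ x - W @ x).max())
print("wrong key,   max |error|:", np.abs(W_wrong @ x - W @ x).max())
```

With the correct key the original weights are recovered exactly, while a wrong key leaves outlier-scaled rows in place; scaling up to thousands of key bits, as in the 7,168-bit configuration evaluated in the paper, follows the same pattern.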
Similar Papers
HAMLOCK: HArdware-Model LOgically Combined attacK
Cryptography and Security
Hides computer attacks in hardware and software.
DistilLock: Safeguarding LLMs from Unauthorized Knowledge Distillation on the Edge
Cryptography and Security
Keeps AI learning private on your device.
Efficient Kernel Mapping and Comprehensive System Evaluation of LLM Acceleration on a CGLA
Hardware Architecture
Makes AI run using much less electricity.