Authority Backdoor: A Certifiable Backdoor Mechanism for Authoring DNNs

Published: December 11, 2025 | arXiv ID: 2512.10600v1

By: Han Yang, Shaofeng Li, Tian Dong, and more

Potential Business Impact:

Locks AI models so that only authorized owners can use them.

Business Areas:
Identity Management, Information Technology, Privacy and Security

Deep Neural Networks (DNNs), as valuable intellectual property, face the risk of unauthorized use. Existing protections, such as digital watermarking, are largely passive: they provide only post-hoc ownership verification and cannot actively prevent illicit use of a stolen model. This work proposes a proactive protection scheme, dubbed "Authority Backdoor," which embeds access constraints directly into the model. In particular, the scheme uses a backdoor-learning framework to intrinsically lock the model's utility, so that it performs normally only in the presence of a specific trigger (e.g., a hardware fingerprint); in the trigger's absence, the model's performance degrades to uselessness. To further secure the scheme, certifiable robustness is integrated to prevent an adaptive attacker from removing the implanted backdoor. The resulting framework establishes a secure authority mechanism for DNNs, combining access control with certifiable robustness against adversarial attacks. Extensive experiments across diverse architectures and datasets validate the effectiveness and certifiable robustness of the proposed framework.
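The core idea of locking a model so it works only when a device-specific trigger is present can be illustrated with a toy stand-in. This sketch is not the paper's method: instead of a learned backdoor trigger, it masks a tiny classifier's weights with noise derived from a hypothetical hardware fingerprint (the fingerprint string and `mask_from_fingerprint` helper are invented for the example). Only a device that reproduces the fingerprint recovers usable weights; a stolen copy without it is useless.

```python
import hashlib

import numpy as np


def mask_from_fingerprint(fp: bytes, shape):
    """Derive a deterministic weight mask from a device fingerprint."""
    seed = int.from_bytes(hashlib.sha256(fp).digest()[:8], "big")
    return np.random.default_rng(seed).normal(scale=5.0, size=shape)


# Toy "owner" model: a linear 2-class classifier on 2-D points (logits = X @ W).
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

# Ship the model "locked": subtract a fingerprint-keyed mask from the weights.
fp = b"device-1234"  # hypothetical hardware fingerprint
W_locked = W - mask_from_fingerprint(fp, W.shape)


def predict(X, weights):
    return np.argmax(X @ weights, axis=1)


# Evaluation data whose true label is 0 when x0 > x1, else 1.
X = np.random.default_rng(1).normal(size=(2000, 2))
y = (X[:, 0] < X[:, 1]).astype(int)

# With the fingerprint, the lock is undone; without it, accuracy collapses.
W_unlocked = W_locked + mask_from_fingerprint(fp, W.shape)
acc_unlocked = float(np.mean(predict(X, W_unlocked) == y))
acc_locked = float(np.mean(predict(X, W_locked) == y))
```

Note that the paper's actual scheme is stronger than this sketch in two ways it does not capture: the lock is embedded through backdoor training (the trigger is an input pattern, not a weight mask), and certified robustness prevents an adaptive attacker from fine-tuning the backdoor away.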

Country of Origin
🇭🇰 Hong Kong

Page Count
9 pages

Category
Computer Science:
Cryptography and Security