Authority Backdoor: A Certifiable Backdoor Mechanism for Authorizing DNNs
By: Han Yang, Shaofeng Li, Tian Dong, and more
Potential Business Impact:
Locks AI models so only their owners can use them.
Deep Neural Networks (DNNs), as valuable intellectual property, face the threat of unauthorized use. Existing protections, such as digital watermarking, are largely passive: they provide only post-hoc ownership verification and cannot actively prevent the illicit use of a stolen model. This work proposes a proactive protection scheme, dubbed "Authority Backdoor," which embeds access constraints directly into the model. In particular, the scheme uses a backdoor learning framework to intrinsically lock a model's utility, so that it performs normally only in the presence of a specific trigger (e.g., a hardware fingerprint); in its absence, the model's performance degrades to the point of being useless. To further strengthen the scheme, certifiable robustness is integrated to prevent an adaptive attacker from removing the implanted backdoor. The resulting framework establishes a secure authority mechanism for DNNs, combining access control with certifiable robustness against adversarial attacks. Extensive experiments on diverse architectures and datasets validate the effectiveness and certifiable robustness of the proposed framework.
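The locking idea described in the abstract can be pictured as a two-term training objective: classify correctly when the trigger is present, and produce uninformative predictions when it is absent. The PyTorch sketch below is only an illustration of that idea under assumed details; the helper names (apply_trigger, authority_loss), the additive trigger pattern, and the random-label objective for clean inputs are assumptions, not the authors' actual method, and the certified-robustness component is omitted.

import torch
import torch.nn.functional as F

def apply_trigger(x, trigger):
    # Overlay a fixed trigger pattern (e.g., one derived from a hardware
    # fingerprint) onto a batch of inputs. Purely illustrative.
    return torch.clamp(x + trigger, 0.0, 1.0)

def authority_loss(model, x, y, trigger, num_classes):
    # Authorized path: triggered inputs must map to the true labels.
    logits_auth = model(apply_trigger(x, trigger))
    loss_auth = F.cross_entropy(logits_auth, y)

    # Unauthorized path: clean inputs are pushed toward random labels,
    # degrading accuracy to roughly chance level without the trigger.
    y_rand = torch.randint(0, num_classes, y.shape, device=y.device)
    logits_clean = model(x)
    loss_lock = F.cross_entropy(logits_clean, y_rand)

    return loss_auth + loss_lock

In such a scheme the trigger acts as an access key: only a party that can reproduce it (e.g., on authorized hardware) recovers the model's full accuracy.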
Similar Papers
Cryptographic Backdoor for Neural Networks: Boon and Bane
Cryptography and Security
Protects smart programs from secret attacks.
AutoBackdoor: Automating Backdoor Attacks via LLM Agents
Cryptography and Security
Creates hidden tricks for AI that are hard to find.
Backdoor Attacks and Defenses in Computer Vision Domain: A Survey
Cryptography and Security
Protects smart cameras from secret tricks.