Detecting Semantic Backdoors in a Mystery Shopping Scenario
By: Arpad Berta, Gabor Danner, Istvan Hegedus, and more
Potential Business Impact:
Finds hidden tricks in AI programs.
Detecting semantic backdoors in classification models, where certain classes can be activated by natural but out-of-distribution inputs, is an important problem that has received relatively little attention. Semantic backdoors are significantly harder to detect than trigger-based backdoors because they lack clearly identifiable trigger patterns. We tackle this problem under the assumption that both the clean training dataset and the training recipe of the model are known. These assumptions are motivated by a consumer protection scenario in which the responsible authority performs mystery shopping to test a machine learning service provider. In this scenario, the authority uses the provider's resources and tools to train a model on a given dataset and tests whether the provider included a backdoor. In our proposed approach, the authority creates a reference model pool by training a small number of clean and poisoned models on trusted infrastructure, and calibrates a model distance threshold that identifies clean models. We propose and experimentally analyze several approaches to computing model distances, and we also test a scenario in which the provider performs an adaptive attack to avoid detection. The most reliable method is based on requesting adversarial training from the provider, with the model distance best measured over a set of input samples generated by inverting the models so as to maximize their distance from clean samples. With these settings, our method can often completely separate clean and poisoned models, and it proves superior to state-of-the-art backdoor detectors as well.
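To make the detection pipeline described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the threshold-calibration step: a trusted reference pool of clean and poisoned models is used to pick a model distance threshold, and the suspect model is flagged if it sits farther from the clean pool than that threshold allows. This is not the authors' implementation; the softmax-output distance, the midpoint threshold rule, and all names (`model_distance`, `calibrate_threshold`, `probe_inputs`, and so on) are illustrative assumptions.

```python
# Hypothetical sketch of the mystery-shopping detection pipeline.
# Not the paper's code: the distance metric and threshold rule
# are illustrative choices made for this sketch.
import torch

def model_distance(m1, m2, probe_inputs):
    """Mean L2 distance between two models' softmax outputs on the
    probe inputs (one of several plausible model distances)."""
    m1.eval()
    m2.eval()
    with torch.no_grad():
        p1 = torch.softmax(m1(probe_inputs), dim=1)
        p2 = torch.softmax(m2(probe_inputs), dim=1)
    return torch.norm(p1 - p2, dim=1).mean().item()

def calibrate_threshold(clean_pool, poisoned_pool, probe_inputs):
    """Place the threshold between the clean-to-clean distances and
    the clean-to-poisoned distances observed in the trusted reference
    pool (the midpoint is one simple choice)."""
    clean_d = [model_distance(a, b, probe_inputs)
               for i, a in enumerate(clean_pool)
               for b in clean_pool[i + 1:]]
    poison_d = [model_distance(c, p, probe_inputs)
                for c in clean_pool for p in poisoned_pool]
    return (max(clean_d) + min(poison_d)) / 2

def is_poisoned(suspect, clean_pool, probe_inputs, threshold):
    """Flag the suspect model if even its nearest clean reference
    model is farther away than the calibrated threshold."""
    d = min(model_distance(suspect, m, probe_inputs) for m in clean_pool)
    return d > threshold
```

A complete pipeline would first generate `probe_inputs` by inverting the reference models (per the abstract, gradient-based optimization of inputs that maximizes their distance from clean samples); since the abstract does not specify the inversion objective, that step is left as a placeholder here.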
Similar Papers
Detecting Backdoor Attacks via Similarity in Semantic Communication Systems
Cryptography and Security
Stops AI spies from tricking communication systems.
Propaganda via AI? A Study on Semantic Backdoors in Large Language Models
Computation and Language
Finds hidden meanings that trick AI.
Variance-Based Defense Against Blended Backdoor Attacks
Machine Learning (CS)
Finds hidden tricks in AI training data.