A New Type of Adversarial Examples
By: Xingyang Nie, Guojie Xiao, Su Pan, and more
Potential Business Impact:
Tricks computers into giving the same answer to very different inputs.
Most machine learning models are vulnerable to adversarial examples, which raises security concerns about these models. Adversarial examples are crafted by applying subtle but intentionally worst-case modifications to examples from the dataset, leading the model to output a different answer than it gives for the original example. In this paper, adversarial examples are formed in the exact opposite manner: they differ significantly from the original examples yet yield the same answer. We propose a novel set of algorithms to produce such adversarial examples, including the negative iterative fast gradient sign method (NI-FGSM) and the negative iterative fast gradient method (NI-FGM), along with their momentum variants: the negative momentum iterative fast gradient sign method (NMI-FGSM) and the negative momentum iterative fast gradient method (NMI-FGM). Adversarial examples constructed by these methods could be used to attack machine learning systems in certain settings. Moreover, our results show that the adversarial examples are not merely distributed in the neighbourhood of the examples from the dataset; instead, they are distributed extensively throughout the sample space.
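The abstract names the algorithms but does not spell them out. Below is a minimal, hypothetical sketch of what NI-FGSM could look like, assuming "negative" means stepping opposite to the gradient sign so that the loss for the original label decreases and the prediction is preserved while the input drifts far from the original example. The function name `ni_fgsm`, the step size `alpha`, the iteration count `num_iters`, and the [0, 1] clamp are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): NI-FGSM is assumed here to take
# iterative fast-gradient-sign steps in the NEGATIVE direction, i.e. steps
# that decrease the loss for the original label, so the prediction stays
# the same while the input moves far from the starting example.
import torch
import torch.nn as nn


def ni_fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
            alpha: float = 0.05, num_iters: int = 50) -> torch.Tensor:
    """Return an input far from x that the model still labels as y (assumed behaviour)."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(num_iters):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Step opposite to the gradient sign: lower the loss for label y
        # (ordinary I-FGSM would add the sign to raise the loss instead).
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep values in a valid input range (assumed)
    return x_adv


# Illustrative usage with a toy classifier:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
y = model(x).argmax(dim=1)                # take the current prediction as the label
x_far = ni_fgsm(model, x, y)
print((x_far - x).abs().mean().item())    # the input typically changes substantially
print(model(x_far).argmax(dim=1) == y)    # the prediction is expected to be unchanged
```

The momentum variants (NMI-FGSM, NMI-FGM) would presumably accumulate the gradient across iterations before taking the step, as momentum iterative attacks usually do, but that detail is not given in the abstract.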
Similar Papers
Analysis of the vulnerability of machine learning regression models to adversarial attacks using data from 5G wireless networks
Cryptography and Security
Finds fake data that tricks computers.
destroR: Attacking Transfer Models with Obfuscous Examples to Discard Perplexity
Computation and Language
Tricks smart computer programs to make mistakes.
Deep learning models are vulnerable, but adversarial examples are even more vulnerable
CV and Pattern Recognition
Makes AI better at spotting fake images.