destroR: Attacking Transfer Models with Obfuscous Examples to Discard Perplexity
By: Saadat Rafid Ahmed, Rubayet Shareen, Radoan Sharkar, and more
Potential Business Impact:
Tricks smart computer programs into making mistakes.
Advancements in machine learning and neural networks in recent years have led to widespread implementations of Natural Language Processing across a variety of fields, solving a wide range of complicated problems with remarkable success. However, recent research has shown that machine learning models can be vulnerable in a number of ways, putting both the models and the systems they're used in at risk. In this paper, we analyze and experiment with the best existing adversarial attack recipes and create new ones. We concentrate on developing a novel adversarial attack strategy against current state-of-the-art machine learning models: producing ambiguous inputs that confound the models, thereby charting a path toward improving model robustness. Using machine learning and deep learning approaches, we craft adversarial instances with maximum perplexity in order to trick the models. In our attack recipe, we analyze several datasets, focus on creating obfuscated adversarial examples that put the models in a state of perplexity, and extend adversarial attack research to the Bangla language. Throughout our work, we strictly prioritize low resource usage and efficiency.
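To make the core idea concrete, below is a minimal sketch of a perplexity-maximizing word-swap attack. It is not the paper's actual recipe: it assumes GPT-2 (via Hugging Face Transformers) as the perplexity scorer and uses a hypothetical hand-written substitution table, where a real attack would propose candidates from embedding neighbors or a masked language model.

import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (higher = more confusing)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return math.exp(loss.item())

# Hypothetical substitution table for illustration only.
SWAPS = {"movie": ["flick", "picture"], "great": ["grand", "lofty"]}

def attack(sentence: str) -> str:
    """Greedily apply the single word swap that most increases perplexity."""
    best, best_ppl = sentence, perplexity(sentence)
    words = sentence.split()
    for i, w in enumerate(words):
        for cand in SWAPS.get(w.lower(), []):
            perturbed = " ".join(words[:i] + [cand] + words[i + 1:])
            ppl = perplexity(perturbed)
            if ppl > best_ppl:
                best, best_ppl = perturbed, ppl
    return best

print(attack("this movie was great"))

The greedy single-swap search keeps the perturbation small (one word), which is the usual way such attacks trade off imperceptibility against the confusion they induce in the target model.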
Similar Papers
Adversarial Confusion Attack: Disrupting Multimodal Large Language Models
Computation and Language
Makes AI models confidently give wrong answers.
Defense That Attacks: How Robust Models Become Better Attackers
CV and Pattern Recognition
Makes AI easier to trick with fake images.