C-LEAD: Contrastive Learning for Enhanced Adversarial Defense
By: Suklav Ghosh, Sonal Kumar, Arijit Sur
Potential Business Impact:
Makes AI smarter and harder to trick.
Deep neural networks (DNNs) have achieved remarkable success in computer vision tasks such as image classification, segmentation, and object detection. However, they are vulnerable to adversarial attacks, in which small perturbations to input images cause incorrect predictions. Addressing this vulnerability is crucial for deploying robust deep-learning systems. This paper presents a novel approach that applies contrastive learning to adversarial defense, a previously unexplored direction. The method uses a contrastive loss to improve the robustness of classification models by training them on both clean and adversarially perturbed images. By optimizing the model's parameters alongside the perturbations, the network learns robust representations that are less susceptible to adversarial attacks. Experimental results show significant improvements in robustness against various types of adversarial perturbations, suggesting that the contrastive loss helps extract more informative and resilient features. These findings contribute to the field of adversarial robustness in deep learning.
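The abstract gives no implementation details, but the training recipe it describes (a classifier jointly optimized on clean and adversarially perturbed images under a contrastive objective) can be sketched roughly as below. Everything in this sketch is an assumption made for illustration: the FGSM attack, the NT-Xent-style contrastive loss, the SmallNet architecture, the projection head, and the weighting term lam are stand-ins, not the authors' actual choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy classifier with a separate feature extractor and linear head (assumed)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())   # -> (B, 32*4*4) = (B, 512)
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x))

def fgsm_perturb(model, images, labels, eps=8 / 255):
    """One-step FGSM attack, an illustrative stand-in for the paper's perturbations."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad, = torch.autograd.grad(loss, images)
    return (images + eps * grad.sign()).clamp(0, 1).detach()

def contrastive_loss(z_clean, z_adv, temperature=0.5):
    """NT-Xent-style loss: each clean embedding's positive is the adversarial
    embedding of the same image; all other batch entries act as negatives."""
    n = z_clean.size(0)
    z = F.normalize(torch.cat([z_clean, z_adv]), dim=1)   # (2n, d)
    sim = z @ z.t() / temperature                         # (2n, 2n) similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))            # exclude self-similarity
    # Positive for row i is row i+n (and vice versa): the other view of the image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def train_step(model, head, images, labels, optimizer, lam=1.0):
    """Joint update: cross-entropy on clean and adversarial views, plus a
    contrastive term (weight lam, assumed) aligning their representations."""
    adv = fgsm_perturb(model, images, labels)
    optimizer.zero_grad()
    f_clean, f_adv = model.features(images), model.features(adv)
    ce = (F.cross_entropy(model.classifier(f_clean), labels)
          + F.cross_entropy(model.classifier(f_adv), labels))
    loss = ce + lam * contrastive_loss(head(f_clean), head(f_adv))
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test on random tensors standing in for a real image batch.
model = SmallNet()
head = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 64))
opt = torch.optim.SGD(list(model.parameters()) + list(head.parameters()), lr=0.01)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(train_step(model, head, x, y, opt))
```

The key design choice this sketch illustrates: treating the clean and adversarial views of the same image as a positive pair is what pushes the network toward perturbation-invariant features, while the supervised cross-entropy terms keep those features discriminative for classification.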
Similar Papers
Defense That Attacks: How Robust Models Become Better Attackers
CV and Pattern Recognition
Makes AI easier to trick with fake images.
A Generative Adversarial Approach to Adversarial Attacks Guided by Contrastive Language-Image Pre-trained Model
CV and Pattern Recognition
Makes AI easier to fool with tiny, hidden changes.