Defending against adversarial attacks using mixture of experts
By: Mohammad Meymani, Roozbeh Razavi-Far
Machine learning is a powerful tool that enables full automation of many tasks without explicit programming. Despite recent progress across domains, these models are vulnerable when exposed to adversarial threats, which aim to prevent machine learning models from meeting their objectives. Attackers can craft adversarial perturbations that are imperceptible to the human eye yet cause misclassification at inference time; they can poison the training data to degrade the model's performance; or they can query the model to steal sensitive information. In this paper, we propose a defense system that embeds an adversarial training module within a mixture-of-experts architecture to enhance its robustness against adversarial threats. Our defense system uses nine pre-trained experts with ResNet-18 as their backbone. During end-to-end training, the parameters of the expert models and the gating mechanism are updated jointly, allowing further optimization of the experts. The proposed defense system outperforms state-of-the-art defense systems and plain classifiers that use a more complex architecture than our model's backbone.
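The architecture the abstract describes, a gating network producing soft weights over several expert classifiers, trained jointly end to end, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class names and dimensions are assumptions, and small linear layers stand in for the nine ResNet-18 experts.

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not the paper's settings).
NUM_EXPERTS = 9   # the paper uses nine experts
NUM_CLASSES = 10
FEATURE_DIM = 32

class MixtureOfExperts(nn.Module):
    def __init__(self, num_experts=NUM_EXPERTS, num_classes=NUM_CLASSES):
        super().__init__()
        # Each expert maps input features to class logits
        # (ResNet-18 backbones in the paper; linear layers here).
        self.experts = nn.ModuleList(
            nn.Linear(FEATURE_DIM, num_classes) for _ in range(num_experts)
        )
        # The gating network outputs a soft weighting over the experts.
        self.gate = nn.Linear(FEATURE_DIM, num_experts)

    def forward(self, x):
        # Gate weights: (batch, experts), rows sum to 1.
        weights = torch.softmax(self.gate(x), dim=-1)
        # Expert logits stacked: (experts, batch, classes).
        outputs = torch.stack([expert(x) for expert in self.experts])
        # Weighted combination of expert logits per sample.
        return torch.einsum("ebc,be->bc", outputs, weights)

model = MixtureOfExperts()
logits = model(torch.randn(4, FEATURE_DIM))
print(logits.shape)  # torch.Size([4, 10])
```

Because the gate and the experts are both ordinary `nn.Module` parameters, a single optimizer over `model.parameters()` updates them jointly, which is the end-to-end training the abstract refers to; the adversarial training module would then supply perturbed inputs during this loop.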