Backdoor or Manipulation? Graph Mixture of Experts Can Defend Against Various Graph Adversarial Attacks
By: Yuyuan Feng, Bin Ma, Enyan Dai
Potential Business Impact:
Protects graph-based AI systems (e.g., social or communication networks) from several kinds of sneaky attacks at once.
Extensive research has highlighted the vulnerability of graph neural networks (GNNs) to adversarial attacks, including manipulation, node injection, and the recently emerging threat of backdoor attacks. However, existing defenses typically focus on a single type of attack and lack a unified approach to defending against multiple threats simultaneously. In this work, we leverage the flexibility of the Mixture of Experts (MoE) architecture to design a scalable, unified framework for defending against backdoor, edge-manipulation, and node-injection attacks. Specifically, we propose a mutual-information-based (MI-based) logic diversity loss that encourages individual experts to focus on distinct neighborhood structures in their decision processes, ensuring that a sufficient subset of experts remains unaffected under perturbations of local structure. Moreover, we introduce a robustness-aware router that identifies perturbation patterns and adaptively routes perturbed nodes to the corresponding robust experts. Extensive experiments under various adversarial settings demonstrate that our method consistently achieves superior robustness against multiple graph adversarial attacks.
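The abstract's two ingredients — a mixture of experts whose members disagree by design, and a router that mixes their predictions per node — can be illustrated with a toy sketch. This is not the paper's method: the linear experts, the fixed gate, and the pairwise-agreement penalty (a crude stand-in for the MI-based logic diversity loss, which the abstract does not specify) are all illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Toy setup: 4 linear experts classifying 8 nodes with 5 features each.
# (Real experts would be GNNs attending to different neighborhood structures.)
n_nodes, n_feat, n_classes, n_experts = 8, 5, 3, 4
X = rng.normal(size=(n_nodes, n_feat))
W = rng.normal(size=(n_experts, n_feat, n_classes))

# Per-expert class probabilities, shape (experts, nodes, classes).
probs = softmax(np.einsum('nf,efc->enc', X, W))

# Router: per-node weights over experts. Here a plain linear gate; the
# paper's robustness-aware router would instead condition on detected
# perturbation patterns to favor experts unaffected by the attack.
G = rng.normal(size=(n_feat, n_experts))
gate = softmax(X @ G)                        # (nodes, experts)
mixed = np.einsum('ne,enc->nc', gate, probs)  # mixture prediction per node

# Diversity penalty: mean pairwise agreement between expert predictions
# (expected probability that two experts emit the same label). Minimizing
# this pushes experts apart, a rough proxy for the MI-based diversity loss.
agree, pairs = 0.0, 0
for i in range(n_experts):
    for j in range(i + 1, n_experts):
        agree += np.mean(np.sum(probs[i] * probs[j], axis=-1))
        pairs += 1
diversity_penalty = agree / pairs
```

In training, `diversity_penalty` would be added (weighted) to the classification loss, so that gradient descent trades a little per-expert accuracy for decorrelated decision logic across experts.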
Similar Papers
BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts
Cryptography and Security
Implants hidden triggers that make MoE language models misbehave on command.
Training Diverse Graph Experts for Ensembles: A Systematic Empirical Study
Machine Learning (CS)
Studies how training diverse graph experts improves ensemble performance.
BadPatches: Backdoor Attacks Against Patch-based Mixture of Experts Architectures
Cryptography and Security
Shows patch-based MoE vision models can be tricked by backdoor attacks.