Budgeted Adversarial Attack against Graph-Based Anomaly Detection in Sensor Networks
By: Sanju Xaviar, Omid Ardakanian
Potential Business Impact:
Tricks smart sensor systems into missing real problems or reporting fake ones.
Graph Neural Networks (GNNs) have emerged as powerful models for anomaly detection in sensor networks, particularly when analyzing multivariate time series. In this work, we introduce BETA, a novel grey-box evasion attack targeting such GNN-based detectors, where the attacker is constrained to perturb sensor readings from a limited set of nodes, excluding the target sensor, with the goal of either suppressing a true anomaly or triggering a false alarm at the target node. BETA identifies the sensors most influential to the target node's classification and injects carefully crafted adversarial perturbations into their features, all while maintaining stealth and respecting the attacker's budget. Experiments on three real-world sensor network datasets show that BETA reduces the detection accuracy of state-of-the-art GNN-based detectors by 30.62 to 39.16% on average, and significantly outperforms baseline attack strategies, while operating within realistic constraints.
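The paper gives the precise influence-scoring and perturbation procedure; as a rough illustration of the general idea only, the sketch below shows a budgeted, gradient-guided feature perturbation against a toy surrogate GNN anomaly scorer in PyTorch. The surrogate model, the gradient-magnitude node-selection rule, and the PGD-style update with an L-infinity budget are illustrative assumptions, not the authors' BETA algorithm.

```python
import torch
import torch.nn as nn

class SurrogateGNN(nn.Module):
    """Toy 2-layer GNN producing a per-node anomaly score (stand-in for the detector)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, 1)

    def forward(self, x, adj):
        # simple mean-aggregation message passing over a row-normalised adjacency
        h = torch.relu(self.lin1(adj @ x))
        return self.lin2(adj @ h).squeeze(-1)  # anomaly score per node

def budgeted_evasion_attack(model, x, adj, target, k=3, eps=0.1, steps=20, lr=0.02, suppress=True):
    """Perturb features of the k most influential non-target nodes within an L-inf budget eps,
    pushing the target node's anomaly score down (hide an anomaly) or up (force a false alarm)."""
    x = x.clone().detach()

    # 1) Influence scoring: gradient of the target's score w.r.t. every node's features.
    x_req = x.clone().requires_grad_(True)
    model(x_req, adj)[target].backward()
    influence = x_req.grad.abs().sum(dim=1)
    influence[target] = -float("inf")          # attacker cannot touch the target sensor itself
    attack_nodes = influence.topk(k).indices   # budget: at most k compromised sensors

    # 2) PGD-style perturbation applied to the selected nodes only.
    delta = torch.zeros_like(x, requires_grad=True)
    mask = torch.zeros(x.size(0), 1)
    mask[attack_nodes] = 1.0
    sign = -1.0 if suppress else 1.0           # direction of the attack objective
    for _ in range(steps):
        loss = sign * model(x + delta * mask, adj)[target]
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()
            delta.clamp_(-eps, eps)            # stealth: stay inside the perturbation budget
        delta.grad.zero_()
    return x + delta.detach() * mask, attack_nodes

# Toy usage on a random 6-node sensor graph
torch.manual_seed(0)
n, d = 6, 4
adj = (torch.rand(n, n) > 0.6).float()
adj = ((adj + adj.t() + torch.eye(n)) > 0).float()
adj = adj / adj.sum(dim=1, keepdim=True)
x = torch.randn(n, d)
model = SurrogateGNN(d, 8)
x_adv, nodes = budgeted_evasion_attack(model, x, adj, target=2, suppress=True)
print("perturbed sensors:", nodes.tolist())
print("target score before/after:", model(x, adj)[2].item(), model(x_adv, adj)[2].item())
```

Here the "budget" is modeled twice, as in the problem setup the abstract describes: a node budget (only k sensors other than the target may be compromised) and a magnitude budget (each perturbed reading stays within eps of its true value, for stealth).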
Similar Papers
β-GNN: A Robust Ensemble Approach Against Graph Structure Perturbation
Machine Learning (CS)
Makes graph-based models stronger against tampered network connections.
Stealthy Yet Effective: Distribution-Preserving Backdoor Attacks on Graph Classification
Machine Learning (CS)
Plants hidden triggers in graph data to secretly fool classifiers.
Unifying Adversarial Perturbation for Graph Neural Networks
Machine Learning (CS)
Makes smart computer networks harder to trick.