IGA-LWP: An Iterative Gradient-based Adversarial Attack for Link Weight Prediction
By: Cunlai Pu, Xingyu Gao, Jinbi Liang, and more
Potential Business Impact:
Shows how attackers can weaken computer predictions about network connections.
Link weight prediction extends classical link prediction by estimating the strength of interactions rather than merely their existence, and it underpins a wide range of applications such as traffic engineering, social recommendation, and scientific collaboration analysis. However, the robustness of link weight prediction against adversarial perturbations remains largely unexplored. In this paper, we formalize the link weight prediction attack problem as an optimization task that aims to maximize the prediction error on a set of target links by adversarially manipulating the weight values of a limited number of links. Based on this formulation, we propose an iterative gradient-based attack framework for link weight prediction, termed IGA-LWP. Using a self-attention-enhanced graph autoencoder as a surrogate predictor, IGA-LWP leverages backpropagated gradients to iteratively identify and perturb a small subset of links. Extensive experiments on four real-world weighted networks demonstrate that IGA-LWP degrades prediction accuracy on target links significantly more than baseline methods. Moreover, the adversarial networks generated by IGA-LWP transfer well across several representative link weight prediction models. These findings expose a fundamental vulnerability in weighted network inference and highlight the need for robust link weight prediction methods.
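The abstract only describes the attack loop in words; the snippet below is a minimal PyTorch sketch of one plausible reading of it, not the authors' code. Everything here is an assumption: `surrogate` stands in for the self-attention-enhanced graph autoencoder, and the MSE objective, the single-link-per-iteration update, and the `budget`, `step`, and `iters` parameters are all illustrative.

```python
import torch

def iga_lwp_attack(surrogate, W, target_mask, true_weights,
                   budget=10, step=0.1, iters=50):
    """Illustrative sketch of an iterative gradient-based link weight attack.

    surrogate    -- differentiable model mapping a weighted adjacency matrix
                    to a matrix of predicted link weights (stand-in for the
                    paper's self-attention-enhanced graph autoencoder)
    W            -- (n, n) weighted adjacency matrix
    target_mask  -- boolean (n, n) mask selecting the target links
    true_weights -- ground-truth weights of the target links
    budget       -- maximum number of distinct links to perturb
    """
    W_adv = W.clone()
    perturbed = torch.zeros_like(W, dtype=torch.bool)  # links modified so far
    for _ in range(iters):
        W_var = W_adv.clone().requires_grad_(True)
        pred = surrogate(W_var)
        # Attack objective: maximize the prediction error on the target links.
        loss = torch.nn.functional.mse_loss(pred[target_mask], true_weights)
        loss.backward()
        grad = W_var.grad.masked_fill(target_mask, 0.0)  # never touch targets
        if perturbed.sum() >= budget:
            # Budget exhausted: only re-perturb links already modified.
            grad = grad.masked_fill(~perturbed, 0.0)
        # Greedily perturb the link whose gradient most increases the loss,
        # stepping in the ascent direction and keeping weights nonnegative.
        i, j = divmod(int(torch.argmax(grad.abs())), W.size(0))
        W_adv[i, j] = (W_adv[i, j] + step * grad[i, j].sign()).clamp(min=0.0)
        perturbed[i, j] = True
    return W_adv
```

Perturbing one link per iteration and freezing the candidate set once `budget` links have been touched mirrors the stated constraint of manipulating only a limited number of links; the actual method may batch updates or select links differently.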
Similar Papers
LiSA: Leveraging Link Recommender to Attack Graph Neural Networks via Subgraph Injection
Machine Learning (CS)
Tricks graph-based AI systems by injecting fake subgraphs.
IGAff: Benchmarking Adversarial Iterative and Genetic Affine Algorithms on Deep Neural Networks
CV and Pattern Recognition
Finds hidden flaws in AI image recognition.
The Gradient Puppeteer: Adversarial Domination in Gradient Leakage Attacks through Model Poisoning
Cryptography and Security
Steals all your private data from shared computer learning.