Experimental robustness benchmark of quantum neural network on a superconducting quantum processor
By: Hai-Feng Zhang, Zhao-Yun Chen, Peng Wang, and more
Potential Business Impact:
Makes quantum machine-learning models harder to fool with adversarial attacks.
Quantum machine learning (QML) models, like their classical counterparts, are vulnerable to adversarial attacks, hindering their secure deployment. Here, we report the first systematic experimental robustness benchmark for 20-qubit quantum neural network (QNN) classifiers executed on a superconducting processor. Our benchmarking framework features an efficient adversarial attack algorithm designed for QNNs, enabling quantitative characterization of adversarial robustness and its bounds. From our analysis, we verify that adversarial training reduces sensitivity to targeted perturbations by regularizing input gradients, significantly enhancing the QNNs' robustness. Additionally, our analysis reveals that QNNs exhibit superior adversarial robustness compared to classical neural networks, an advantage attributed to inherent quantum noise. Furthermore, the empirical upper bound extracted from our attack experiments deviates only minimally ($3 \times 10^{-3}$) from the theoretical lower bound, providing strong experimental confirmation of the attack's effectiveness and the tightness of fidelity-based robustness bounds. This work establishes a critical experimental framework for assessing and improving quantum adversarial robustness, paving the way for secure and reliable QML applications.
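The abstract does not spell out the attack algorithm itself, so the following is only a minimal, simulator-side sketch of the generic idea behind gradient-based attacks on QNN classifiers: an FGSM-style perturbation of the classical input to a variational quantum circuit, written with PennyLane. The four-qubit circuit, angle encoding, squared-error loss, and epsilon value are illustrative assumptions, not the paper's 20-qubit hardware setup or its specific attack.

import pennylane as qml
from pennylane import numpy as pnp

n_qubits = 4  # illustrative; the experiment in the paper uses 20 qubits
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn(x, weights):
    # Angle-encode the classical input, then apply a trainable entangling ansatz.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # The expectation of Z on qubit 0, in [-1, 1], serves as the class score.
    return qml.expval(qml.PauliZ(0))

def loss(x, weights, y):
    # Squared error against a +/-1 label (an assumption, not the paper's loss).
    return (qnn(x, weights) - y) ** 2

def fgsm_attack(x, weights, y, eps=0.1):
    # Fast-gradient-sign-style attack: step the *input* (not the weights)
    # along the sign of the input gradient so as to increase the loss.
    grad_x = qml.grad(loss, argnum=0)(x, weights, y)
    return x + eps * pnp.sign(grad_x)

# Usage with random, untrained weights:
shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = pnp.random.uniform(0, 2 * pnp.pi, size=shape)
x = pnp.array([0.3, -0.7, 1.1, 0.5], requires_grad=True)
x_adv = fgsm_attack(x, weights, y=1.0)
print("clean score:", qnn(x, weights))
print("adversarial score:", qnn(x_adv, weights))

Adversarial training in this setting amounts to including such perturbed inputs in the training loss, which penalizes large input gradients, the mechanism the abstract credits for the robustness gain.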
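Likewise, the exact fidelity-based bound is not given in this summary; the derivation below is an illustrative sketch of how bounds of this type arise from textbook inequalities, not necessarily the bound used in the paper. For a classifier that predicts $\arg\max_k p_k(\rho)$ with $p_k(\rho) = \mathrm{Tr}(\Pi_k \rho)$, measurement probabilities shift by at most the trace distance $D(\rho,\sigma)$, which the Fuchs-van de Graaf inequality ties to the fidelity $F(\rho,\sigma) = \mathrm{Tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}$:
\[
|p_k(\rho) - p_k(\sigma)| \le D(\rho,\sigma) \le \sqrt{1 - F(\rho,\sigma)^2}.
\]
If the gap between the two largest class probabilities exceeds $2D(\rho,\sigma)$, the predicted class cannot change, so a sufficient robustness condition in terms of fidelity is
\[
F(\rho,\sigma) > \sqrt{1 - \tfrac{1}{4}\bigl(p_{(1)}(\rho) - p_{(2)}(\rho)\bigr)^2},
\]
where $p_{(1)}, p_{(2)}$ are the largest and second-largest class probabilities. An attack that finds a misclassifying $\sigma$ whose fidelity sits just above such a lower bound, as the reported $3 \times 10^{-3}$ gap suggests, is what the tightness claim refers to.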
Similar Papers
Critical Evaluation of Quantum Machine Learning for Adversarial Robustness
Cryptography and Security
Makes quantum machine-learning models harder to fool with adversarial attacks.
Quantitative Analysis of Deeply Quantized Tiny Neural Networks Robust to Adversarial Attacks
Machine Learning (CS)
Makes tiny neural networks smaller and harder to fool with adversarial tricks.