Differentially Private Federated Quantum Learning via Quantum Noise
By: Atit Pokharel, Ratun Rahman, Shaba Shaon, and more
Potential Business Impact:
Protects private training data on quantum computers.
Quantum federated learning (QFL) enables collaborative training of quantum machine learning (QML) models across distributed quantum devices without raw data exchange. However, QFL remains vulnerable to adversarial attacks, in which shared QML model updates can be exploited to undermine information privacy. In the context of noisy intermediate-scale quantum (NISQ) devices, a key question arises: How can inherent quantum noise be leveraged to enforce differential privacy (DP) and protect model information during training and communication? This paper explores a novel DP mechanism that harnesses quantum noise to safeguard quantum models throughout the QFL process. By tuning noise variance through the number of measurement shots and the depolarizing channel strength, our approach achieves desired DP levels tailored to NISQ constraints. Simulations demonstrate the framework's effectiveness by examining the relationship between the differential privacy budget and the noise parameters, as well as the trade-off between security and training accuracy. Additionally, we demonstrate the framework's robustness against an adversarial-example attack designed to compromise model performance, evaluated on metrics such as accuracy on adversarial examples, confidence scores for correct predictions, and attack success rate. The results reveal a tunable trade-off between privacy and robustness, providing an efficient solution for secure QFL on NISQ devices with significant potential for reliable quantum computing applications.
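To make the abstract's core idea concrete, the sketch below shows how finite measurement shots and a depolarizing channel jointly set an effective noise level, and how that level can be mapped to a DP budget. This is a minimal NumPy illustration, not the paper's exact analysis: the single-qubit noise model, the treatment of the combined noise as a Gaussian mechanism, and the per-sample sensitivity bound are all assumptions made here for the example.

```python
import numpy as np

def shot_noise_std(exp_val: float, shots: int) -> float:
    """Std. dev. of a Pauli-Z expectation estimated from finite shots.
    Each shot is +/-1, so the mean of N shots has variance (1 - <Z>^2) / N."""
    return np.sqrt(max(1.0 - exp_val**2, 0.0) / shots)

def depolarized_expectation(exp_val: float, p: float) -> float:
    """Single-qubit depolarizing channel with strength p shrinks <Z> by (1 - p)."""
    return (1.0 - p) * exp_val

def gaussian_dp_epsilon(sigma: float, sensitivity: float, delta: float) -> float:
    """Solve the standard Gaussian-mechanism relation
    sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon for epsilon."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / sigma

# Illustrative parameters (not from the paper): shot count, depolarizing
# strength, DP failure probability, and a noiseless circuit expectation.
shots, p, delta = 1024, 0.05, 1e-5
true_exp = 0.8

noisy_exp = depolarized_expectation(true_exp, p)   # channel attenuates the signal
sigma = shot_noise_std(noisy_exp, shots)           # shot noise acts as the DP noise
sensitivity = 2.0 / shots                          # hypothetical per-sample bound
eps = gaussian_dp_epsilon(sigma, sensitivity, delta)
print(f"effective sigma = {sigma:.4f}, epsilon = {eps:.3f} at delta = {delta}")
```

Fewer shots or a stronger depolarizing channel increase the effective sigma, which lowers epsilon (stronger privacy) but also degrades the expectation estimates used for training, reproducing the privacy-accuracy trade-off the abstract describes.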
Similar Papers
Enhancing Gradient Variance and Differential Privacy in Quantum Federated Learning
Quantum Physics
Makes AI learn better and safer with less noise.
Noise-Resilient Quantum Aggregation on NISQ for Federated ADAS Learning
Machine Learning (CS)
Cars learn together safely, even with bad connections.
Scaling Trust in Quantum Federated Learning: A Multi-Protocol Privacy Design
Cryptography and Security
Keeps private data safe during AI training.