Adversarial Robustness in Distributed Quantum Machine Learning
By: Pouya Kananian, Hans-Arno Jacobsen
Potential Business Impact:
Makes quantum computers safer from hackers.
Studying the adversarial robustness of quantum machine learning (QML) models is essential to understanding their potential advantages over classical models and to building trustworthy systems. Distributing QML models makes it possible to leverage multiple quantum processors, overcoming the limitations of individual devices and enabling scalable systems. However, this distribution can affect their adversarial robustness, potentially making them more vulnerable to new attacks. Key paradigms in distributed QML include federated learning, which, as in the classical setting, involves training a shared model on local data and sending only the model updates, as well as circuit distribution methods inherent to quantum computing, such as circuit cutting and teleportation-based techniques. These quantum-specific methods enable the distributed execution of quantum circuits across multiple devices. This work reviews the differences between these distribution methods, summarizes existing work on the adversarial robustness of QML models when distributed using each paradigm, and discusses open questions in this area.
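To make the federated paradigm concrete, below is a minimal sketch of federated averaging (FedAvg) over the parameters of a small variational quantum classifier. It assumes PennyLane's default.qubit simulator stands in for each party's quantum processor; the circuit architecture, toy client data, and helper names (local_update, fed_avg) are illustrative choices, not taken from the paper.

```python
import numpy as np                  # plain NumPy for (non-trainable) toy data
import pennylane as qml
from pennylane import numpy as pnp  # autograd-aware tensors for trainable weights

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def model(weights, x):
    # Encode the classical input, then apply one trainable entangling layer.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def local_update(weights, X, y, lr=0.1):
    # One gradient step on a client's private data; in federated learning
    # only these updated weights, never the raw data, leave the client.
    def cost(w):
        loss = 0.0
        for xi, yi in zip(X, y):
            loss = loss + (model(w, xi) - yi) ** 2
        return loss / len(X)
    return weights - lr * qml.grad(cost)(weights)

def fed_avg(client_weights):
    # Server-side FedAvg: average the clients' parameter tensors.
    return pnp.mean(pnp.stack(client_weights), axis=0)

# Two clients with small toy datasets (angle features, labels in {-1, +1}).
rng = np.random.default_rng(7)
clients = [
    (rng.uniform(0, 2 * np.pi, (4, n_qubits)), rng.choice([-1.0, 1.0], size=4))
    for _ in range(2)
]

shape = qml.BasicEntanglerLayers.shape(n_layers=1, n_wires=n_qubits)
global_w = pnp.random.random(shape, requires_grad=True)

for _ in range(3):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = pnp.array(fed_avg(updates), requires_grad=True)
```

For the quantum-specific paradigm, the sketch below marks a cut point with qml.WireCut and applies PennyLane's qml.cut_circuit transform (available in recent PennyLane releases), which splits the circuit into fragments that could run on separate, smaller devices and classically recombines their results. The circuit itself is a toy example, not one from the paper.

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=3)

@qml.cut_circuit   # replaces execution with fragment runs + classical postprocessing
@qml.qnode(dev)
def cut_model(x):
    qml.RX(x, wires=0)
    qml.CNOT(wires=[0, 1])
    qml.WireCut(wires=1)  # cut here: wire 1 becomes measure-and-prepare pairs
    qml.CNOT(wires=[1, 2])
    return qml.expval(qml.PauliZ(2))

print(cut_model(0.4))  # recovers the same expectation value as the uncut circuit
```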
Similar Papers
Critical Evaluation of Quantum Machine Learning for Adversarial Robustness
Cryptography and Security
Makes quantum computers safer from hackers.
Trustworthy Quantum Machine Learning: A Roadmap for Reliability, Robustness, and Security in the NISQ Era
Quantum Physics
Makes quantum computers safer for important jobs.