Superior resilience to poisoning and amenability to unlearning in quantum machine learning
By: Yu-Qin Chen, Shi-Xin Zhang
Potential Business Impact:
Quantum computers forget bad data better.
The reliability of artificial intelligence hinges on the integrity of its training data, a foundation often compromised by noise and corruption. Here, through a comparative study of classical and quantum neural networks on both classical and quantum data, we reveal a fundamental difference in their response to data corruption. We find that classical models exhibit brittle memorization, leading to a failure in generalization. In contrast, quantum models demonstrate remarkable resilience, which is underscored by a phase transition-like response to increasing label noise, revealing a critical point beyond which the model's performance changes qualitatively. We further establish and investigate the field of quantum machine unlearning, the process of efficiently forcing a trained model to forget corrupting influences. We show that the classical model's brittleness leads it to form rigid, stubborn memories of erroneous data, making efficient unlearning challenging, while the quantum model is significantly more amenable to efficient forgetting with approximate unlearning methods. Our findings establish that quantum machine learning can possess a dual advantage of intrinsic resilience and efficient adaptability, providing a promising paradigm for the trustworthy and robust artificial intelligence of the future.
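To make the two ingredients of the abstract concrete, the sketch below shows what label-noise poisoning and an approximate unlearning pass can look like in code: a tiny two-qubit variational classifier is trained on partially flipped labels, then a few gradient-ascent steps are taken on the identified poisoned subset to push the model away from what it memorized there. This is a minimal illustration only, not the authors' experimental setup; the simulator, circuit, function names (circuit_expectation, loss, grad), and hyperparameters are all assumptions for the sake of the example.

```python
import numpy as np

# Single-qubit identity, Pauli-Z, and RY rotation; CNOT entangler.
I2 = np.eye(2, dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def ry(a):
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def circuit_expectation(x, theta):
    """Two-qubit variational classifier: RY data encoding, trainable RY layer,
    CNOT entangler, and <Z> readout on qubit 0 as a prediction in [-1, 1]."""
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0                                        # |00>
    state = np.kron(ry(x[0]), ry(x[1])) @ state           # encode the two features
    state = np.kron(ry(theta[0]), ry(theta[1])) @ state   # trainable rotations
    state = CNOT @ state                                  # entangling gate
    z0 = np.kron(Z, I2)                                   # Z on qubit 0
    return float(np.real(state.conj() @ z0 @ state))

def loss(theta, xs, ys):
    preds = np.array([circuit_expectation(x, theta) for x in xs])
    return float(np.mean((preds - ys) ** 2))

def grad(theta, xs, ys, eps=1e-4):
    """Finite-difference gradient (a parameter-shift rule would also work)."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (loss(tp, xs, ys) - loss(tm, xs, ys)) / (2 * eps)
    return g

rng = np.random.default_rng(0)

# Toy dataset: label depends on whether the two features fall on the same side of pi/2.
xs = rng.uniform(0, np.pi, size=(60, 2))
ys = np.where((xs[:, 0] - np.pi / 2) * (xs[:, 1] - np.pi / 2) > 0, 1.0, -1.0)

# Poison 20% of the labels by flipping them (the "corrupting influence").
n_poison = int(0.2 * len(ys))
poison_idx = rng.choice(len(ys), size=n_poison, replace=False)
ys_noisy = ys.copy()
ys_noisy[poison_idx] *= -1

# Train on the poisoned labels with plain gradient descent.
theta = rng.uniform(0, 2 * np.pi, size=2)
for _ in range(200):
    theta -= 0.3 * grad(theta, xs, ys_noisy)

# Approximate unlearning: a few gradient-ascent steps on the identified
# poisoned subset, undoing some of what the model memorized about it.
for _ in range(20):
    theta += 0.1 * grad(theta, xs[poison_idx], ys_noisy[poison_idx])

print("loss on clean labels after unlearning:", loss(theta, xs, ys))
```

The same skeleton applies to the classical-versus-quantum comparison the abstract describes: swap the circuit for a small classical network, keep the poisoning and the gradient-ascent forget step fixed, and compare how much clean-data performance each model recovers.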
Similar Papers
Intrinsic preservation of plasticity in continual quantum learning
Quantum Physics
Quantum computers learn forever without forgetting.
Quantum Machine Learning via Contrastive Training
Machine Learning (CS)
Teaches quantum models to learn from data without labels.
Critical Evaluation of Quantum Machine Learning for Adversarial Robustness
Cryptography and Security
Tests how well quantum models really hold up against adversarial attacks.