Dataset Distillation for Quantum Neural Networks
By: Koustubh Phalak, Junde Li, Swaroop Ghosh
Potential Business Impact:
Makes quantum computers learn faster with less data.
Training Quantum Neural Networks (QNNs) on large amounts of classical data can be both time consuming and expensive. A larger training set requires more gradient descent steps to reach convergence, which in turn means the QNN needs more quantum executions, driving up its overall execution cost. In this work, we propose performing dataset distillation for QNNs, using a novel quantum variant of the classical LeNet model that contains a residual connection and a trainable Hermitian observable in the Parametric Quantum Circuit (PQC) of the QNN. This approach yields a small but highly informative set of training data that achieves performance similar to the original data. We perform distillation on the MNIST and CIFAR-10 datasets and, comparing against classical models, observe that both datasets yield reasonably similar post-inference accuracy on the quantum LeNet (91.9% MNIST, 50.3% CIFAR-10) compared to the classical LeNet (94% MNIST, 54% CIFAR-10). We also introduce a non-trainable Hermitian observable to ensure stability in the distillation process and note a marginal accuracy reduction of up to 1.8% (1.3%) for the MNIST (CIFAR-10) dataset.
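For intuition, below is a minimal sketch of a gradient-based dataset distillation loop in plain PyTorch. It is not the paper's implementation: the SimpleNet placeholder, the distill helper, and all hyperparameters are hypothetical, and the paper's quantum LeNet (with its residual connection and trainable Hermitian observable in the PQC) would stand in for the placeholder classifier.

# Minimal sketch of dataset distillation, assuming a PyTorch setup.
# The synthetic images are trainable parameters: an inner gradient step on the
# synthetic data is kept differentiable, and the loss of the updated model on
# real data is backpropagated into the synthetic images.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleNet(nn.Module):
    """Placeholder linear classifier; the paper uses a quantum LeNet variant instead."""
    def __init__(self, n_pixels=28 * 28, n_classes=10):
        super().__init__()
        self.fc = nn.Linear(n_pixels, n_classes)

    def forward(self, x, weight=None, bias=None):
        # Allow externally supplied weights so the inner update stays differentiable.
        w = self.fc.weight if weight is None else weight
        b = self.fc.bias if bias is None else bias
        return F.linear(x.flatten(1), w, b)

def distill(real_loader, n_distilled=10, n_classes=10,
            steps=200, lr_inner=0.1, lr_outer=0.01):
    # Synthetic images (trainable) with fixed, evenly spread labels.
    syn_x = torch.randn(n_distilled, 28 * 28, requires_grad=True)
    syn_y = torch.arange(n_distilled) % n_classes
    opt = torch.optim.Adam([syn_x], lr=lr_outer)

    for _, (real_x, real_y) in zip(range(steps), real_loader):
        model = SimpleNet()
        w, b = model.fc.weight, model.fc.bias

        # Inner step: one gradient-descent update on the synthetic data,
        # keeping the graph so the update itself can be differentiated.
        inner_loss = F.cross_entropy(model(syn_x, w, b), syn_y)
        gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
        w_new, b_new = w - lr_inner * gw, b - lr_inner * gb

        # Outer step: the updated model should perform well on real data;
        # backpropagate that loss into the synthetic images.
        outer_loss = F.cross_entropy(model(real_x, w_new, b_new), real_y)
        opt.zero_grad()
        outer_loss.backward()
        opt.step()

    return syn_x.detach(), syn_y

The distilled set returned here would then be used to train the model from scratch, which is the setting the abstract compares between the quantum and classical LeNet.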
Similar Papers
Knowledge Distillation for Variational Quantum Convolutional Neural Networks on Heterogeneous Data
Quantum Physics
Teaches computers to learn from different data.
Distributed Quantum Neural Networks on Distributed Photonic Quantum Computing
Quantum Physics
Makes computers learn faster with less data.
Hybrid Quantum-Classical Learning for Multiclass Image Classification
Quantum Physics
Makes computers better at recognizing pictures.