Knowledge Distillation for Variational Quantum Convolutional Neural Networks on Heterogeneous Data
By: Kai Yu, Binbin Cai, Song Lin
Potential Business Impact:
Lets quantum computers at different sites learn together from different kinds of data without sharing it.
Distributed quantum machine learning faces significant challenges due to heterogeneous client data and variations in local model structures, which hinder global model aggregation. To address these challenges, we propose a knowledge distillation framework for variational quantum convolutional neural networks (VQCNNs) on heterogeneous data. The framework features a quantum gate number estimation mechanism based on client data, which guides the construction of resource-adaptive VQCNN circuits. Particle swarm optimization is employed to efficiently generate personalized quantum models tailored to local data characteristics. During aggregation, a knowledge distillation strategy integrating both soft-label and hard-label supervision consolidates knowledge from heterogeneous clients using a public dataset, forming a global model while avoiding parameter exposure and privacy leakage. Theoretical analysis shows that the proposed framework benefits from high-dimensional quantum representations, offering advantages over classical approaches, and minimizes communication by exchanging only model indices and test outputs. Extensive simulations on the PennyLane platform validate the effectiveness of the gate number estimation and the distillation-based aggregation. Experimental results demonstrate that the aggregated global model achieves accuracy close to fully supervised centralized training. These results show that the proposed framework can effectively handle heterogeneity, reduce resource consumption, and maintain performance, highlighting its potential for scalable and privacy-preserving distributed quantum learning.
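The aggregation step described in the abstract combines hard-label supervision from a public dataset with soft-label supervision from the heterogeneous clients' outputs, so only model indices and test outputs are exchanged. As a rough illustration of how such an objective can be assembled, the following NumPy-only sketch mixes cross-entropy on public-set labels with a KL term against the averaged, temperature-softened client outputs. The weighting alpha, temperature T, and simple averaging of client outputs are assumptions made for illustration, not the authors' exact formulation.

```python
# Minimal sketch of a soft-/hard-label distillation objective for aggregating
# heterogeneous client models into a global (student) model on a public dataset.
# alpha, T, and the plain averaging of client outputs are illustrative choices.
import numpy as np

def softmax(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, client_logits_list, hard_labels,
                      alpha=0.5, T=2.0, eps=1e-12):
    """student_logits     : (N, C) global-model outputs on the public set
       client_logits_list : list of (N, C) client (teacher) outputs on the same set
       hard_labels        : (N,) integer labels of the public dataset"""
    n = student_logits.shape[0]
    # Hard-label supervision: cross-entropy against public-set labels.
    p_student = softmax(student_logits)
    ce = -np.mean(np.log(p_student[np.arange(n), hard_labels] + eps))
    # Soft-label supervision: average the clients' softened outputs
    # (only outputs are shared, never circuit parameters) and match them with KL.
    p_teacher = np.mean([softmax(l, T) for l in client_logits_list], axis=0)
    p_soft = softmax(student_logits, T)
    kl = np.mean(np.sum(p_teacher * (np.log(p_teacher + eps) - np.log(p_soft + eps)),
                        axis=-1))
    return alpha * ce + (1.0 - alpha) * kl * T ** 2
```

In this sketch the student would be a VQCNN circuit (e.g., simulated on PennyLane) whose logits are produced by measuring the circuit on the public samples; the same loss shape applies regardless of how each client's circuit was constructed.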
Similar Papers
Dataset Distillation for Quantum Neural Networks
Machine Learning (CS)
Makes quantum computers learn faster with less data.
A Comprehensive Survey on Knowledge Distillation
CV and Pattern Recognition
Makes big AI models run on small devices.
KLiNQ: Knowledge Distillation-Assisted Lightweight Neural Network for Qubit Readout on FPGA
Quantum Physics
Makes quantum computers faster and more accurate.