KLiNQ: Knowledge Distillation-Assisted Lightweight Neural Network for Qubit Readout on FPGA
By: Xiaorang Guo, Tigran Bunarjyan, Dai Liu, and more
Potential Business Impact:
Makes qubit readout on quantum computers faster and more accurate.
Superconducting qubits are among the most promising candidates for building quantum information processors, yet they are often limited by slow, error-prone qubit readout, a critical factor in achieving high-fidelity operations. While current methods, including deep neural networks, improve readout accuracy, they typically lack support for the mid-circuit measurements essential to quantum error correction, and they usually rely on large, resource-intensive network models. This paper presents KLiNQ, a novel qubit readout architecture leveraging lightweight neural networks optimized via knowledge distillation. Our approach achieves a roughly 99% reduction in model size compared to the baseline while maintaining a qubit-state discrimination accuracy of 91%. By assigning a dedicated, compact neural network to each qubit, KLiNQ enables rapid, independent qubit-state readouts that support mid-circuit measurement. Implemented on a Xilinx UltraScale+ FPGA, our design performs the discrimination within 32 ns. These results demonstrate that compressed neural networks can maintain high-fidelity independent readout while enabling efficient hardware implementation, advancing practical quantum computing.
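To make the distillation step concrete, below is a minimal PyTorch sketch of the kind of teacher-student setup the abstract describes: a large baseline classifier distilled into a compact per-qubit network using the standard soft-label loss of Hinton et al. The layer sizes, trace length, temperature, and loss weighting are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
# Hypothetical knowledge-distillation sketch for binary qubit-state readout.
# All sizes and hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

TRACE_LEN = 500   # assumed length of a flattened I/Q readout trace
NUM_STATES = 2    # ground |0> vs. excited |1>

teacher = nn.Sequential(            # large baseline model
    nn.Linear(TRACE_LEN, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, NUM_STATES),
)

student = nn.Sequential(            # compact per-qubit model (orders of magnitude smaller)
    nn.Linear(TRACE_LEN, 8), nn.ReLU(),
    nn.Linear(8, NUM_STATES),
)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-label distillation loss (Hinton et al.) blended with hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# One illustrative training step on random stand-in data.
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
traces = torch.randn(64, TRACE_LEN)          # stand-in readout traces
labels = torch.randint(0, NUM_STATES, (64,)) # stand-in qubit-state labels

with torch.no_grad():                        # the trained teacher stays frozen
    t_logits = teacher(traces)
loss = kd_loss(student(traces), t_logits, labels)
opt.zero_grad()
loss.backward()
opt.step()
```

After training, only the small student would be deployed, which is what makes a per-qubit instance on an FPGA practical: its parameter count, not the teacher's, sets the hardware footprint and latency.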
Similar Papers
Superconducting Qubit Readout Using Next-Generation Reservoir Computing
Quantum Physics
Makes quantum computers read qubit states faster.
Knowledge Distillation for Variational Quantum Convolutional Neural Networks on Heterogeneous Data
Quantum Physics
Teaches quantum networks to learn from varied data.
Q-Fusion: Diffusing Quantum Circuits
Machine Learning (CS)
Automatically generates new quantum circuits.