Score: 2

KLiNQ: Knowledge Distillation-Assisted Lightweight Neural Network for Qubit Readout on FPGA

Published: March 5, 2025 | arXiv ID: 2503.03544v1

By: Xiaorang Guo, Tigran Bunarjyan, Dai Liu, and more

BigTech Affiliations: Princeton University

Potential Business Impact:

Speeds up and improves the accuracy of qubit-state readout on superconducting quantum computers, a key bottleneck for high-fidelity operations and quantum error correction.

Business Areas:
Quantum Computing Science and Engineering

Superconducting qubits are among the most promising candidates for building quantum information processors. Yet, they are often limited by slow and error-prone qubit readout -- a critical factor in achieving high-fidelity operations. While current methods, including deep neural networks, enhance readout accuracy, they typically lack support for mid-circuit measurements essential for quantum error correction, and they usually rely on large, resource-intensive network models. This paper presents KLiNQ, a novel qubit readout architecture leveraging lightweight neural networks optimized via knowledge distillation. Our approach achieves around a 99% reduction in model size compared to the baseline while maintaining a qubit-state discrimination accuracy of 91%. KLiNQ facilitates rapid, independent qubit-state readouts that enable mid-circuit measurements by assigning a dedicated, compact neural network for each qubit. Implemented on the Xilinx UltraScale+ FPGA, our design can perform the discrimination within 32 ns. The results demonstrate that compressed neural networks can maintain high-fidelity independent readout while enabling efficient hardware implementation, advancing practical quantum computing.
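The summary does not give KLiNQ's exact architecture or training hyperparameters, but the knowledge-distillation step it describes typically trains a small student network to match a large teacher's softened outputs while also fitting the hard labels. The sketch below illustrates that standard distillation loss (per Hinton et al.) for a two-class readout task (ground vs. excited state); the function names, logits, and hyperparameters (`T`, `alpha`) are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels, T=4.0, alpha=0.5):
    """Blend of soft-target KL divergence (teacher -> student) and
    hard-label cross-entropy. `alpha` weights the distillation term;
    the T**2 factor keeps gradient magnitudes comparable across temperatures."""
    p_t = softmax(teacher_logits, T)   # soft targets from the large teacher
    p_s = softmax(student_logits, T)   # softened student predictions
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)

    p_hard = softmax(student_logits, 1.0)
    rows = np.arange(len(hard_labels))
    ce = -np.log(p_hard[rows, hard_labels] + 1e-12)  # standard cross-entropy

    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce))

# Toy example: a large teacher readout model guides a compact student.
teacher = np.array([[4.0, -2.0], [-3.0, 3.5]])  # confident teacher logits
student = np.array([[1.0, -0.5], [-0.8, 1.2]])  # weaker student logits
labels = np.array([0, 1])                       # 0 = ground state, 1 = excited
loss = distillation_loss(student, teacher, labels)
```

In a per-qubit setup like the one described, each qubit's compact student network would be distilled independently from its own teacher, which is what allows the small models to run in parallel on the FPGA.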

Country of Origin
🇺🇸 🇩🇪 United States, Germany

Page Count
7 pages

Category
Physics:
Quantum Physics