LUNA: LUT-Based Neural Architecture for Fast and Low-Cost Qubit Readout
By: M. A. Farooq, G. Di Guglielmo, A. Rajagopala, and more
Potential Business Impact:
Makes quantum computers read out results faster using smaller, cheaper hardware.
Qubit readout is a critical operation in quantum computing systems, which maps the analog response of qubits into discrete classical states. Deep neural networks (DNNs) have recently emerged as a promising solution to improve readout accuracy. Prior hardware implementations of DNN-based readout are resource-intensive and suffer from high inference latency, limiting their practical use in low-latency decoding and quantum error correction (QEC) loops. This paper proposes LUNA, a fast and efficient superconducting qubit readout accelerator that combines low-cost integrator-based preprocessing with Look-Up Table (LUT)-based neural networks for classification. The architecture uses simple integrators for dimensionality reduction with minimal hardware overhead, and employs LogicNets (DNNs synthesized into LUT logic) to drastically reduce resource usage while enabling ultra-low-latency inference. The authors integrate this with a differential-evolution-based exploration and optimization framework to identify high-quality design points. Their results show up to a 10.95x reduction in area and 30% lower latency with little to no loss in fidelity compared to the state of the art. LUNA enables scalable, low-footprint, and high-speed qubit readout, supporting the development of larger and more reliable quantum computing systems.
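Below is a minimal, self-contained Python/NumPy sketch of the kind of pipeline the abstract describes: boxcar integrators reduce raw I/Q readout traces to a few features, and a coarsely quantized linear discriminant stands in for the LUT-mapped LogicNets classifier. The synthetic traces, the number of windows, the bit width, and every function name here are illustrative assumptions, not the paper's implementation.

# Minimal sketch (assumptions: synthetic Gaussian I/Q traces, boxcar integrators
# for dimensionality reduction, and a coarse-quantized linear discriminant as a
# stand-in for the LUT-synthesized LogicNets classifier).
import numpy as np

rng = np.random.default_rng(0)

def synthetic_iq_traces(n, length=512, state=0):
    # Hypothetical I/Q readout traces: state-dependent mean plus Gaussian noise.
    mean = 0.8 if state else -0.8
    i = mean + rng.normal(0.0, 1.0, size=(n, length))
    q = -mean + rng.normal(0.0, 1.0, size=(n, length))
    return np.stack([i, q], axis=1)  # shape (n, 2, length)

def integrator_features(traces, n_windows=4):
    # Dimensionality reduction with simple integrators: sum each channel over
    # a few fixed windows instead of keeping every sample.
    n, ch, length = traces.shape
    windows = np.array_split(np.arange(length), n_windows)
    feats = [traces[:, :, w].sum(axis=2) for w in windows]  # (n, ch) per window
    return np.concatenate(feats, axis=1)  # (n, ch * n_windows)

# Fit a trivial linear discriminant on the integrated features.
x0 = integrator_features(synthetic_iq_traces(2000, state=0))
x1 = integrator_features(synthetic_iq_traces(2000, state=1))
x = np.vstack([x0, x1])
y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])
w = np.linalg.lstsq(np.c_[x, np.ones(len(x))], 2 * y - 1, rcond=None)[0]

def classify(traces, bits=4):
    # Quantize features coarsely so the decision function could, in principle,
    # be realized as small look-up tables rather than multipliers.
    f = integrator_features(traces)
    scale = np.abs(f).max() / (2 ** (bits - 1))
    fq = np.round(f / scale) * scale
    score = fq @ w[:-1] + w[-1]
    return (score > 0).astype(int)

acc = (classify(synthetic_iq_traces(1000, state=1)) == 1).mean()
print(f"single-state readout accuracy on synthetic data: {acc:.3f}")

In the paper, a differential-evolution search would additionally explore choices such as window placement, quantization, and network topology; that search loop is omitted here for brevity.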
Similar Papers
KLiNQ: Knowledge Distillation-Assisted Lightweight Neural Network for Qubit Readout on FPGA
Quantum Physics
Makes quantum computers faster and more accurate.
A Survey on LUT-based Deep Neural Networks Implemented in FPGAs
Hardware Architecture
Makes smart devices run AI faster and cheaper.
LUT-LLM: Efficient Large Language Model Inference with Memory-based Computations on FPGAs
Hardware Architecture
Makes AI run faster and use less power.