QuanBench: Benchmarking Quantum Code Generation with Large Language Models
By: Xiaoyu Guo, Minggu Wang, Jianjun Zhao
Potential Business Impact:
Tests how well AI language models can write code for quantum computers.
Large language models (LLMs) have demonstrated strong performance in general code generation; however, their capabilities in quantum code generation remain insufficiently studied. This paper presents QuanBench, a benchmark for evaluating LLMs on quantum code generation. QuanBench includes 44 programming tasks covering quantum algorithms, state preparation, gate decomposition, and quantum machine learning. Each task has an executable canonical solution and is evaluated by functional correctness (Pass@K) and quantum semantic equivalence (Process Fidelity). We evaluate several recent LLMs, including general-purpose and code-specialized models. The results show that current LLMs have limited capability in generating correct quantum code, with overall accuracy below 40% and frequent semantic errors. We also analyze common failure cases, such as outdated API usage, circuit construction errors, and incorrect algorithm logic. QuanBench provides a basis for future work on improving quantum code generation with LLMs.
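To make the two evaluation metrics concrete, the sketch below illustrates how they are typically computed: the standard unbiased Pass@k estimator (Chen et al., 2021) and a process-fidelity check between a generated circuit and a canonical solution using Qiskit's quantum_info module. This is an illustrative sketch, not the paper's actual harness; the Bell-state task, circuit names, and sample counts are hypothetical.

```python
# Hedged sketch of QuanBench-style metrics; task and circuits are illustrative.
from math import comb

from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator, process_fidelity


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: probability that at least one of k
    completions sampled from n is correct, given c correct completions."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# Canonical solution for a hypothetical task: Bell-state preparation.
canonical = QuantumCircuit(2)
canonical.h(0)
canonical.cx(0, 1)

# A hypothetical LLM-generated candidate for the same task.
candidate = QuantumCircuit(2)
candidate.h(0)
candidate.cx(0, 1)

# Process fidelity compares the circuits as unitary channels;
# 1.0 indicates semantic equivalence, lower values indicate deviation.
fidelity = process_fidelity(Operator(candidate), Operator(canonical))

print(f"Pass@1 (n=20 samples, c=7 correct): {pass_at_k(20, 7, 1):.3f}")
print(f"Process fidelity: {fidelity:.4f}")
```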
Similar Papers
QCBench: Evaluating Large Language Models on Domain-Specific Quantitative Chemistry
Artificial Intelligence
Tests if computers can do math for chemistry.
QCoder Benchmark: Bridging Language Generation and Quantum Hardware through Simulator-Based Feedback
Computation and Language
Helps computers write code for quantum machines.