Large Language Model Scaling Laws for Neural Quantum States in Quantum Chemistry

Published: September 16, 2025 | arXiv ID: 2509.12679v1

By: Oliver Knitter, Dan Zhao, Stefan Leichenauer, and more

Potential Business Impact:

Predicts how much model size, training time, and compute are needed for neural-network simulations of quantum chemistry problems, helping plan resources for such calculations.

Business Areas:
Quantum Computing; Science and Engineering

Scaling laws have been used to describe how large language model (LLM) performance scales with model size, training data size, or amount of computational resources. Motivated by the fact that neural quantum states (NQS) have increasingly adopted LLM-based components, we seek to understand NQS scaling laws, thereby shedding light on the scalability and optimal performance–resource trade-offs of NQS ansätze. In particular, we identify scaling laws that predict the performance, as measured by absolute error and V-score, of transformer-based NQS as a function of problem size in second-quantized quantum chemistry applications. By performing an analogous compute-constrained optimization of the obtained parametric curves, we find that the relationship between model size and training time is highly dependent on the loss metric and ansatz, and does not follow the approximately linear relationship found for language models.
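
The abstract describes two steps: fitting parametric scaling curves to measured errors, and then optimizing the fitted curves under a fixed compute budget. The sketch below illustrates that general workflow under assumed Chinchilla-style power-law forms; the functional form, the constants, and variable names (`n_sites`, `abs_err`, `A`, `B`, `alpha`, `beta`, `C`) are hypothetical placeholders, not the paper's actual parametrization or data.

```python
import numpy as np

# Minimal sketch of the two steps named in the abstract, under assumed
# Chinchilla-style power-law forms (the paper's exact parametrization may differ).

# 1) Fit a scaling law error ~ a * n^b to synthetic (problem size, error) pairs
#    via ordinary least squares in log-log space.
rng = np.random.default_rng(0)
n_sites = np.array([10, 20, 40, 80, 160])                            # hypothetical problem sizes
abs_err = 2e-3 * n_sites**1.4 * rng.normal(1.0, 0.05, n_sites.size)  # synthetic errors
b, log_a = np.polyfit(np.log(n_sites), np.log(abs_err), 1)
a = np.exp(log_a)
print(f"fitted scaling law: error ~ {a:.2e} * n^{b:.2f}")

# 2) Compute-constrained optimization for an assumed loss
#    L(N, D) = E + A*N**-alpha + B*D**-beta with budget C proportional to N*D.
#    Setting dL/dN = 0 under the constraint D = C/N gives
#    N* = ((alpha*A)/(beta*B))**(1/(alpha+beta)) * C**(beta/(alpha+beta)).
A, B, alpha, beta = 5.0, 20.0, 0.35, 0.30   # illustrative fitted constants
C = 1e12                                    # compute budget in N*D units
N_star = ((alpha * A) / (beta * B)) ** (1 / (alpha + beta)) * C ** (beta / (alpha + beta))
D_star = C / N_star
print(f"compute-optimal split: model size ~ {N_star:.3g}, training steps ~ {D_star:.3g}")
```

The closed-form optimum in step 2 is what makes it possible to ask, for each fitted curve, whether optimal model size and training time grow together roughly linearly (as reported for LLMs) or not, which is the comparison the abstract highlights.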

Country of Origin
🇺🇸 United States

Page Count
16 pages

Category
Computer Science:
Machine Learning (CS)