
How Can Quantum Deep Learning Improve Large Language Models?

Published: September 17, 2025 | arXiv ID: 2509.16244v1

By: Emily Jimin Roh, Hyojun Ahn, Samuel Yen-Chi Chen, and more

Potential Business Impact:

Makes adapting AI models to new tasks much faster and cheaper.

Business Areas:
Quantum Computing; Science and Engineering

The rapid progress of large language models (LLMs) has transformed natural language processing, yet the challenge of efficient adaptation remains unresolved. Full fine-tuning achieves strong performance but imposes prohibitive computational and memory costs. Parameter-efficient fine-tuning (PEFT) strategies, such as low-rank adaptation (LoRA), prefix tuning, and sparse low-rank adaptation (SoRA), address this issue by reducing trainable parameters while maintaining competitive accuracy. However, these methods often encounter limitations in scalability, stability, and generalization across diverse tasks. Recent advances in quantum deep learning introduce novel opportunities through quantum-inspired encoding and parameterized quantum circuits (PQCs). In particular, the quantum-amplitude embedded adaptation (QAA) framework demonstrates expressive model updates with minimal overhead. This paper presents a systematic survey and comparative analysis of conventional PEFT methods and QAA. The analysis highlights trade-offs in convergence, efficiency, and representational capacity, while providing insight into the potential of quantum approaches for future LLM adaptation.
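To make the comparison concrete, the two sketches below illustrate the kinds of adapters the abstract contrasts. Both are minimal illustrations written for this summary, not code from the paper: the first shows a conventional LoRA-style low-rank update in PyTorch, the second an amplitude-embedding PQC in PennyLane in the spirit of QAA. All class names, circuit shapes, and hyperparameters are illustrative assumptions.

```python
# Minimal LoRA-style adapter sketch (illustrative; not the paper's code).
# A frozen linear layer gets a trainable low-rank correction B @ A, so only
# r * (d_in + d_out) parameters are trained instead of d_in * d_out.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, rank))        # up-projection, zero init
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank update.
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), rank=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"trainable parameters: {trainable}")  # 12288 vs. 768*768 + 768 for full fine-tuning
```

```python
# Illustrative sketch of the quantum side: amplitude embedding plus a
# parameterized quantum circuit (PQC), in the spirit of QAA as described in
# the abstract. The circuit layout and shapes are assumptions, not the paper's design.
import numpy as np
import pennylane as qml

n_qubits = 4                      # encodes a 2**4 = 16-dimensional feature vector
dev = qml.device("default.qubit", wires=n_qubits)


@qml.qnode(dev)
def pqc_adapter(features, weights):
    # Amplitude embedding packs a length-16 vector into the amplitudes of 4 qubits.
    qml.AmplitudeEmbedding(features, wires=range(n_qubits), normalize=True)
    # Trainable entangling layers provide the expressive, low-parameter update.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]


features = np.random.randn(2 ** n_qubits)
weights = np.random.randn(2, n_qubits, 3) * 0.1   # (layers, qubits, rotation angles)
print(pqc_adapter(features, weights))
```

The contrast these sketches are meant to show: a LoRA adapter trains r * (d_in + d_out) extra parameters per layer, while a PQC adapter encodes a 2^n-dimensional vector into n qubits and trains only a small set of rotation angles, which is the basis of the abstract's claim that QAA offers expressive updates with minimal overhead.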

Page Count
5 pages

Category
Physics: Quantum Physics