Physical models realizing the transformer architecture of large language models

Published: May 21, 2025 | arXiv ID: 2507.13354v2

By: Zeqian Chen

Potential Business Impact:

Offers a physical (open quantum system) grounding for transformer-based large language models, which could inform how such models are realized on modern hardware.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The introduction of the transformer architecture in 2017 marked the most striking advancement in natural language processing. The transformer is a model architecture that relies entirely on an attention mechanism to draw global dependencies between input and output. However, we believe there is a gap in our theoretical understanding of what the transformer is and how it works physically. From a physical perspective, modern chips, such as those fabricated below the 28 nm node, mean that modern intelligent machines should be regarded as open quantum systems rather than conventional statistical systems. Accordingly, in this paper, we construct physical models realizing large language models based on the transformer architecture as open quantum systems in the Fock space over the Hilbert space of tokens. These physical models underlie the transformer architecture for large language models.
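
For context, a minimal sketch of the two constructions the abstract names: the scaled dot-product attention of the original 2017 transformer, and a Fock space built over the Hilbert space of tokens. The abstract does not specify which variant of Fock space the paper uses (full, symmetric, or antisymmetric), so the full Fock space below is an assumption:

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V,
    \qquad
    \mathcal{F}(\mathcal{H}) = \bigoplus_{n=0}^{\infty} \mathcal{H}^{\otimes n},

where \mathcal{H} denotes the Hilbert space of tokens, \mathcal{H}^{\otimes n} the space of length-n token sequences, and d_k the key dimension in the attention mechanism.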

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)