Physical models realizing the transformer architecture of large language models
By: Zeqian Chen
Potential Business Impact:
Makes computers understand words like humans do.
The introduction of the transformer architecture in 2017 marked the most striking advancement in natural language processing. The transformer is a model architecture relying entirely on an attention mechanism to draw global dependencies between input and output. However, we believe there is a gap in our theoretical understanding of what the transformer is and how it works physically. From a physical perspective on modern chips, such as those fabricated at process nodes below 28 nm, modern intelligent machines should be regarded as open quantum systems rather than conventional statistical systems. Accordingly, in this paper, we construct physical models realizing large language models based on a transformer architecture as open quantum systems in the Fock space over the Hilbert space of tokens. Our physical models underlie the transformer architecture for large language models.
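For readers unfamiliar with the attention mechanism the abstract refers to, the sketch below is a minimal NumPy illustration of standard scaled dot-product self-attention as introduced with the transformer in 2017; it is not the paper's open-quantum-system formulation, and the array names and dimensions are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard scaled dot-product attention (transformer, 2017).

    Q, K: arrays of shape (seq_len, d_k); V: array of shape (seq_len, d_v).
    Returns an array of shape (seq_len, d_v) in which each output position
    is a weighted mixture of all value vectors, i.e. a global dependency
    across the whole input sequence.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V

# Illustrative example: 4 tokens with 8-dimensional embeddings (assumed sizes).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(out.shape)  # (4, 8)
```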
Similar Papers
Physical Transformer
Machine Learning (CS)
AI learns to move and interact with the real world.
A Mathematical Explanation of Transformers for Large Language Models and GPTs
Machine Learning (CS)
Explains how AI learns by seeing patterns.
Advancements in Natural Language Processing: Exploring Transformer-Based Architectures for Text Understanding
Computation and Language
Computers now understand and write like people.