The Role of Logic and Automata in Understanding Transformers
By: Anthony W. Lin, Pablo Barcelo
The advent of transformers has in recent years led to powerful and revolutionary Large Language Models (LLMs). Despite this, our understanding of the capabilities of transformers is still meager. In this invited contribution, we recount the rapid progress made in the last few years on the question of what transformers can do. In particular, we will see the integral role of logic and automata (also with some help from circuit complexity) in answering this question. We also mention several open problems at the intersection of logic, automata, verification, and transformers.