On the Universality of Transformer Architectures: How Much Attention Is Enough?

Published: December 20, 2025 | arXiv ID: 2512.18445v1

By: Amirreza Abbasi, Mohsen Hooshmand

Transformers are central to many areas of AI, including large language models, computer vision, and reinforcement learning. This prominence stems from the architecture's perceived universality and its scalability relative to alternatives. This work examines the problem of universality in Transformers, reviews recent progress, including results on structural minimality and approximation rates, and surveys state-of-the-art advances that inform both theoretical and practical understanding. Our aim is to clarify what is currently known about Transformers' expressiveness, to separate robust guarantees from fragile ones, and to identify key directions for future theoretical research.
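For readers less familiar with the building block whose expressive power the paper studies, the following is a minimal sketch of standard single-head scaled dot-product self-attention in NumPy. It is a generic illustration, not code from the paper; all names, shapes, and the toy sizes are chosen only for this example.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (n, d) sequence of n token embeddings of width d.
    # Wq, Wk, Wv: (d, d_h) query, key, and value projections.
    # Returns the (n, d_h) attended representations.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (n, n) pairwise similarities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, d_h = 4, 8, 8                       # toy dimensions for illustration
    X = rng.normal(size=(n, d))
    Wq, Wk, Wv = (rng.normal(size=(d, d_h)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)

Universality results of the kind surveyed here ask how many such attention layers (together with feed-forward blocks) are needed to approximate a given class of sequence-to-sequence functions, and at what rate the approximation error decays.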

Category
Computer Science:
Machine Learning (CS)