Transformers for Learning on Noisy and Task-Level Manifolds: Approximation and Generalization Insights
By: Zhaiming Shen, Alex Havrilla, Rongjie Lai, and more
Potential Business Impact:
Makes AI learn better from messy information.
Transformers serve as the foundational architecture for large language and video generation models, such as GPT, BERT, SORA, and their successors. Empirical studies have demonstrated that real-world data and learning tasks exhibit low-dimensional structures, along with some noise or measurement error. The performance of transformers tends to depend on the intrinsic dimension of the data/tasks, yet a theoretical understanding of this dependence remains largely unexplored for transformers. This work establishes a theoretical foundation by analyzing the performance of transformers for regression tasks involving noisy input data on a manifold. Specifically, the input data lie in a tubular neighborhood of a manifold, while the ground truth function depends on the projection of the noisy data onto the manifold. We prove approximation and generalization error bounds that depend crucially on the intrinsic dimension of the manifold. Our results demonstrate that transformers can leverage low-complexity structures in learning tasks even when the input data are perturbed by high-dimensional noise. Our novel proof technique constructs representations of basic arithmetic operations by transformers, which may hold independent interest.
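As a rough sketch, the regression setup described in the abstract can be written as follows, using illustrative notation (the symbols $\mathcal{M}$, $q$, $\pi_{\mathcal{M}}$, and $g$ are shorthand chosen here, not necessarily the paper's, and the quantitative error rates are not reproduced):

\[
\mathcal{M} \subset \mathbb{R}^{D} \ \text{a compact $d$-dimensional manifold}, \qquad d \ll D,
\]
\[
\mathcal{M}(q) \;=\; \{\, x \in \mathbb{R}^{D} : \operatorname{dist}(x, \mathcal{M}) \le q \,\} \quad \text{(tubular neighborhood of radius $q$)},
\]
\[
\tilde{x} \in \mathcal{M}(q) \ \text{(noisy input)}, \qquad f(\tilde{x}) \;=\; g\!\bigl(\pi_{\mathcal{M}}(\tilde{x})\bigr),
\]

where $\pi_{\mathcal{M}}$ denotes the nearest-point projection onto $\mathcal{M}$, which is well defined inside a sufficiently small tube. Under this reading, the approximation and generalization bounds for a transformer estimating $f$ scale with the intrinsic dimension $d$ rather than the ambient dimension $D$.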
Similar Papers
Transformers Meet In-Context Learning: A Universal Approximation Theory
Machine Learning (CS)
Teaches computers to learn new things instantly.
Transformers as Unsupervised Learning Algorithms: A study on Gaussian Mixtures
Machine Learning (CS)
Teaches computers to learn without examples.
Transformers Can Overcome the Curse of Dimensionality: A Theoretical Study from an Approximation Perspective
Machine Learning (CS)
Makes AI understand complex patterns better and faster.