Transformers as Unsupervised Learning Algorithms: A study on Gaussian Mixtures
By: Zhiheng Chen, Ruofan Wu, Guanhua Fang
Potential Business Impact:
Teaches computers to find patterns in data without labeled examples.
The transformer architecture has demonstrated remarkable capabilities in modern artificial intelligence, among which the ability to implicitly learn an internal model at inference time is widely believed to play a key role in understanding pre-trained large language models. However, most recent work has focused on supervised learning topics such as in-context learning, leaving unsupervised learning largely unexplored. This paper investigates the capabilities of transformers in solving Gaussian Mixture Models (GMMs), a fundamental unsupervised learning problem, through the lens of statistical estimation. We propose a transformer-based learning framework called TGMM that simultaneously learns to solve multiple GMM tasks using a shared transformer backbone. We empirically demonstrate that the learned models effectively mitigate the limitations of classical methods such as Expectation-Maximization (EM) and spectral algorithms, while also exhibiting reasonable robustness to distribution shifts. Theoretically, we prove that transformers can approximate both the EM algorithm and a core component of spectral methods (cubic tensor power iterations). These results bridge the gap between practical success and theoretical understanding, positioning transformers as versatile tools for unsupervised learning.
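For readers unfamiliar with the classical baseline that the paper's theory says transformers can approximate, the sketch below shows a minimal EM loop for fitting a K-component Gaussian mixture. This is an illustrative assumption on our part, not the paper's TGMM code: the function name, the full-covariance parameterization, and the small regularization constant are all choices made here for clarity.

```python
# Minimal sketch of the classical EM baseline for a K-component GMM.
# Not the paper's TGMM implementation; names and setup are illustrative.
import numpy as np

def em_gmm(X, K, n_iter=100, seed=0):
    """Fit a Gaussian mixture to data X of shape (n, d) via EM."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initialize means from random data points, uniform weights, identity covariances.
    mu = X[rng.choice(n, K, replace=False)]
    pi = np.full(K, 1.0 / K)
    Sigma = np.array([np.eye(d) for _ in range(K)])

    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to pi_k * N(x_i | mu_k, Sigma_k).
        log_r = np.empty((n, K))
        for k in range(K):
            diff = X - mu[k]
            cov = Sigma[k] + 1e-6 * np.eye(d)  # small regularization for stability
            sol = np.linalg.solve(cov, diff.T).T
            maha = np.sum(diff * sol, axis=1)
            _, logdet = np.linalg.slogdet(cov)
            log_r[:, k] = np.log(pi[k]) - 0.5 * (maha + logdet + d * np.log(2 * np.pi))
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: re-estimate weights, means, and covariances from responsibilities.
        Nk = r.sum(axis=0)
        pi = Nk / n
        mu = (r.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            Sigma[k] = (r[:, k, None] * diff).T @ diff / Nk[k]
    return pi, mu, Sigma
```

The paper's framing, as described in the abstract, is that a single pre-trained transformer backbone can stand in for running such per-task iterative procedures (EM, or the cubic tensor power iterations used in spectral methods) across many GMM tasks at once.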
Similar Papers
Transformers for Learning on Noisy and Task-Level Manifolds: Approximation and Generalization Insights
Machine Learning (CS)
Makes AI learn better from messy information.
Rethinking Nonlinearity: Trainable Gaussian Mixture Modules for Modern Neural Architectures
Machine Learning (CS)
Makes computer brains learn better and faster.
Gaussian mixture models as a proxy for interacting language models
Computation and Language
Models learn how people act by talking to each other.