Why Do Transformers Fail to Forecast Time Series In-Context?
By: Yufa Zhou, Yixiao Wang, Surbhi Goel, and more
Potential Business Impact:
Explains why Transformer-based forecasters can underperform simpler linear models, guiding the design of more effective forecasting systems.
Time series forecasting (TSF) remains a challenging and largely unsolved problem in machine learning, despite significant recent efforts leveraging Large Language Models (LLMs), which predominantly rely on Transformer architectures. Empirical evidence consistently shows that even powerful Transformers often fail to outperform much simpler models, e.g., linear models, on TSF tasks; however, a rigorous theoretical understanding of this phenomenon remains limited. In this paper, we provide a theoretical analysis of Transformers' limitations for TSF through the lens of In-Context Learning (ICL) theory. Specifically, under AR($p$) data, we establish that: (1) Linear Self-Attention (LSA) models $\textit{cannot}$ achieve lower expected MSE than classical linear models for in-context forecasting; (2) as the context length approaches infinity, LSA asymptotically recovers the optimal linear predictor; and (3) under Chain-of-Thought (CoT) style inference, predictions collapse exponentially fast to the mean. We empirically validate these findings through carefully designed experiments. Our theory not only sheds light on several previously underexplored phenomena but also offers practical insights for designing more effective forecasting architectures. We hope our work encourages the broader research community to revisit the fundamental theoretical limitations of TSF and to critically evaluate the direct application of increasingly sophisticated architectures without deeper scrutiny.
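The sketch below (not the authors' code) illustrates the setting at toy scale: it simulates an AR($p$) process, fits the classical linear predictor by OLS on the in-context history, and then runs a CoT-style rollout that feeds each prediction back as input. For a stable AR process, this noise-free rollout contracts geometrically toward the process mean (zero here), mirroring claim (3). The coefficients, context length, and noise scale are illustrative assumptions.

```python
# Minimal sketch, assuming an AR(2) process with illustrative coefficients.
import numpy as np

rng = np.random.default_rng(0)
phi = np.array([0.6, 0.3])      # true AR(2) coefficients (roots inside the unit circle, so stable)
p, T = len(phi), 512            # AR order and context length

# Simulate a zero-mean AR(p) context sequence.
x = np.zeros(T)
for t in range(p, T):
    x[t] = phi @ x[t - p:t][::-1] + rng.normal(scale=0.1)

# Classical linear predictor: OLS on lagged windows drawn from the context.
X = np.stack([x[t - p:t][::-1] for t in range(p, T)])  # rows: (x_{t-1}, ..., x_{t-p})
y = x[p:T]
phi_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print("estimated AR coefficients:", np.round(phi_hat, 3))

# CoT-style inference: append each one-step prediction to the window and roll forward.
# With no fresh noise, a stable AR rollout decays exponentially fast to the mean (0).
window = list(x[-p:])           # most recent p observations, chronological order
rollout = []
for _ in range(30):
    pred = phi_hat @ np.array(window[::-1])
    rollout.append(pred)
    window = window[1:] + [pred]
print("|prediction| over rollout steps:", np.round(np.abs(rollout), 4))
```

Printing the absolute rollout values shows them shrinking step by step, which is the mean-collapse behavior the paper attributes to CoT-style inference; the one-step OLS fit, by contrast, recovers the AR coefficients closely, consistent with the linear baseline being hard to beat in-context.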
Similar Papers
In-Context and Few-Shots Learning for Forecasting Time Series Data based on Large Language Models
Machine Learning (CS)
Uses LLMs' in-context and few-shot learning abilities to forecast time series data.
On the Role of Transformer Feed-Forward Layers in Nonlinear In-Context Learning
Machine Learning (CS)
Examines how Transformer feed-forward layers support nonlinear in-context learning.
Towards Theoretical Understanding of Transformer Test-Time Computing: Investigation on In-Context Linear Regression
Machine Learning (CS)
Studies Transformer test-time computation through the lens of in-context linear regression.