Transformer Is Inherently a Causal Learner
By: Xinyue Wang, Stephen Wang, Biwei Huang
Potential Business Impact:
Finds hidden connections in data over time.
We reveal that transformers trained in an autoregressive manner naturally encode time-delayed causal structures in their learned representations. When predicting future values in a multivariate time series, the gradient sensitivities of transformer outputs with respect to past inputs directly recover the underlying causal graph, without any explicit causal objectives or structural constraints. We prove this connection theoretically under standard identifiability conditions and develop a practical extraction method based on aggregated gradient attributions. On challenging cases such as nonlinear dynamics, long-term dependencies, and non-stationary systems, this approach substantially outperforms state-of-the-art discovery algorithms, with the gap widening as data heterogeneity increases. It also exhibits a scaling property that traditional methods lack: causal accuracy improves with both data volume and heterogeneity. This unifying view lays the groundwork for a future paradigm in which causal discovery operates through the lens of foundation models, and foundation models gain interpretability and enhancement through the lens of causality.
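To make the extraction idea concrete, here is a minimal sketch of how gradient attributions could be aggregated into a lagged causal graph. This is not the authors' released code: the model interface (`model`, mapping a window of past values to a one-step-ahead prediction), the window length `max_lag`, and the mean-absolute-gradient aggregation are all illustrative assumptions.

```python
import torch
import torch.nn as nn

def gradient_attribution_graph(model: nn.Module, x: torch.Tensor,
                               max_lag: int) -> torch.Tensor:
    """Score time-delayed causal links via gradient sensitivities.

    Assumed interface (illustrative): `model` maps a window of shape
    (batch, max_lag, d) of past values to a next-step prediction of
    shape (batch, d). `x` is a multivariate series of shape (T, d).

    Returns a (max_lag, d, d) tensor A, where A[l, i, j] scores the
    influence of variable j at lag l+1 on variable i at time t.
    """
    model.eval()  # deterministic forward pass (no dropout)
    T, d = x.shape
    scores = torch.zeros(max_lag, d, d)
    n_windows = 0
    for t in range(max_lag, T):
        # Fresh leaf tensor so we can take gradients w.r.t. the window.
        window = x[t - max_lag:t].detach().clone().requires_grad_(True)
        pred = model(window.unsqueeze(0)).squeeze(0)  # shape (d,)
        for i in range(d):
            # d pred_i / d window[l, j]; retain the graph because we
            # differentiate the same forward pass once per output dim.
            grad, = torch.autograd.grad(pred[i], window, retain_graph=True)
            # Flip the lag axis so index 0 is lag 1 (most recent step),
            # and aggregate absolute sensitivities across windows.
            scores[:, i, :] += grad.abs().flip(0)
        n_windows += 1
    return scores / n_windows
```

Thresholding the aggregated score tensor (for instance, keeping entries above a chosen quantile) would then yield a binary time-delayed graph that can be compared against a ground-truth causal structure.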
Similar Papers
Transforming Causality: Transformer-Based Temporal Causal Discovery with Prior Knowledge Integration
Machine Learning (CS)
Finds true causes in messy time data.
Mechanistic Interpretability for Transformer-based Time Series Classification
Machine Learning (CS)
Shows how AI learns to predict patterns.
A Mechanistic Analysis of Transformers for Dynamical Systems
Machine Learning (CS)
Explains why computers predict the future well.