A Self-explainable Model of Long Time Series by Extracting Informative Structured Causal Patterns
By: Ziqian Wang, Yuxiao Cheng, Jinli Suo
Potential Business Impact:
Shows which past events drive a model's future predictions.
Explainability is essential for neural networks that model long time series, yet most existing explainable AI methods produce only point-wise importance scores and fail to capture temporal structures such as trends, cycles, and regime changes. This limitation weakens human interpretability and trust in long-horizon models. To address it, we identify four key requirements for interpretable time-series modeling: temporal continuity, pattern-centric explanation, causal disentanglement, and faithfulness to the model's inference process. We propose EXCAP, a unified framework that satisfies all four. EXCAP combines an attention-based segmenter that extracts coherent temporal patterns, a causally structured decoder guided by a pre-trained causal graph, and a latent aggregation mechanism that enforces representation stability. Our theoretical analysis shows that EXCAP yields smooth, stable explanations over time and is robust to perturbations of its causal masks. Extensive experiments on classification and forecasting benchmarks demonstrate that EXCAP achieves strong predictive accuracy while generating coherent, causally grounded explanations. These results show that EXCAP offers a principled and scalable approach to interpretable modeling of long time series, with direct relevance to high-stakes domains such as healthcare and finance.
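The abstract describes three components: an attention-based segmenter, a decoder constrained by a pre-trained causal graph, and latent aggregation. The sketch below is a minimal PyTorch illustration of how such a pipeline could be wired together; it is not the authors' implementation, and every name in it (AttentionSegmenter, CausalDecoder, EXCAPSketch, causal_mask) is a hypothetical stand-in for the components the abstract names.

```python
import torch
import torch.nn as nn


class AttentionSegmenter(nn.Module):
    """Soft-assigns time steps to K pattern segments via learned query attention.

    Assumption: the paper's segmenter is attention-based; the exact form is not
    given in the abstract, so this uses K learned queries attending over time.
    """
    def __init__(self, d_model: int, n_segments: int):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_segments, d_model))
        self.scale = d_model ** -0.5

    def forward(self, h):  # h: (batch, time, d_model)
        # (batch, n_segments, time): each segment query attends over time steps.
        attn = torch.softmax(self.queries @ h.transpose(1, 2) * self.scale, dim=-1)
        segments = attn @ h  # (batch, n_segments, d_model)
        return segments, attn  # attn doubles as the pattern-level explanation


class CausalDecoder(nn.Module):
    """Mixes segment latents under a fixed 0/1 causal mask (the pre-trained graph)."""
    def __init__(self, d_model: int, causal_mask: torch.Tensor):
        super().__init__()
        self.register_buffer("mask", causal_mask)  # (n_segments, n_segments)
        self.mix = nn.Linear(d_model, d_model)

    def forward(self, segments):  # (batch, n_segments, d_model)
        # Each segment receives messages only from its causal parents.
        weights = self.mask / self.mask.sum(-1, keepdim=True).clamp(min=1)
        return segments + self.mix(weights @ segments)


class EXCAPSketch(nn.Module):
    def __init__(self, n_features, d_model, n_segments, causal_mask, n_out):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        self.segmenter = AttentionSegmenter(d_model, n_segments)
        self.decoder = CausalDecoder(d_model, causal_mask)
        self.head = nn.Linear(d_model, n_out)

    def forward(self, x):  # x: (batch, time, n_features)
        segments, attn = self.segmenter(self.embed(x))
        z = self.decoder(segments)
        # Latent aggregation, here mean-pooling, as a simple stand-in for the
        # paper's stability-enforcing aggregation mechanism.
        return self.head(z.mean(dim=1)), attn


# Usage with a random causal graph over 4 segments (illustrative only).
mask = (torch.rand(4, 4) > 0.5).float()
model = EXCAPSketch(n_features=3, d_model=32, n_segments=4,
                    causal_mask=mask, n_out=2)
logits, attn = model(torch.randn(8, 128, 3))
print(logits.shape, attn.shape)  # (8, 2), (8, 4, 128)
```

Returning the attention map alongside the logits mirrors the paper's faithfulness requirement in spirit: the explanation is read off the quantities the forward pass actually computes, rather than produced by a separate post-hoc method.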
Similar Papers
C-SHAP for time series: An approach to high-level temporal explanations
Artificial Intelligence
Explains AI's time-series predictions using high-level concepts.
Mechanistic Interpretability for Transformer-based Time Series Classification
Machine Learning (CS)
Reveals how a transformer classifier recognizes patterns in time series.
Towards Explainable and Reliable AI in Finance
Machine Learning (CS)
Makes financial predictions trustworthy and understandable.