C-SHAP for time series: An approach to high-level temporal explanations
By: Annemarie Jutte, Faizan Ahmed, Jeroen Linssen, et al.
Potential Business Impact:
Explains an AI's time-series predictions using high-level concepts.
Time series are ubiquitous in domains such as energy forecasting, healthcare, and industry. AI systems can handle many tasks in these domains efficiently. Explainable AI (XAI) aims to increase the reliability of AI solutions by explaining model reasoning. For time series, many XAI methods provide point- or sequence-based attribution maps, explaining model reasoning in terms of low-level patterns. However, they do not capture high-level patterns that may also influence model reasoning. We propose a concept-based method that provides explanations in terms of these high-level patterns. In this paper, we present C-SHAP for time series, an approach which determines the contribution of concepts to a model outcome. We provide a general definition of C-SHAP and present an example implementation using time series decomposition. Additionally, we demonstrate the effectiveness of the methodology through a use case from the energy domain.
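To make the idea concrete, here is a minimal sketch of concept-level Shapley attribution in the spirit the abstract describes. It assumes the "concepts" are additive components from a simple trend/seasonal/residual decomposition (a stand-in for the paper's decomposition step; the function names, the zero baseline, and the moving-average decomposition are illustrative assumptions, not the authors' implementation) and computes exact Shapley values over coalitions of those components.

```python
from itertools import combinations
from math import factorial

import numpy as np

def decompose(x, period=12):
    """Split a series into additive trend, seasonal, and residual 'concepts'.

    A simple moving-average decomposition; a stand-in for whatever
    decomposition a real C-SHAP implementation would use.
    """
    n = len(x)
    kernel = np.ones(period) / period
    trend = np.convolve(x, kernel, mode="same")       # centered moving average
    detrended = x - trend
    # Seasonal: mean of each phase of the period, tiled to full length.
    phase_means = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(phase_means, n // period + 1)[:n]
    residual = x - trend - seasonal
    return {"trend": trend, "seasonal": seasonal, "residual": residual}

def c_shap(model, x, period=12):
    """Exact Shapley value of each concept for model(x).

    A coalition S of concepts reconstructs the input as the sum of the
    components in S; absent concepts are replaced by zeros (the baseline).
    """
    comps = decompose(x, period)
    names = list(comps)
    m = len(names)

    def value(S):
        series = sum((comps[c] for c in S), np.zeros_like(x))
        return model(series)

    phi = {}
    for c in names:
        others = [n for n in names if n != c]
        total = 0.0
        for k in range(m):
            for S in combinations(others, k):
                w = factorial(k) * factorial(m - k - 1) / factorial(m)
                total += w * (value(S + (c,)) - value(S))
        phi[c] = total
    return phi
```

By construction the attributions satisfy the Shapley efficiency property: the concept contributions sum to the difference between the model output on the full series and on the all-zeros baseline, so each concept's share of the prediction is directly interpretable.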
Similar Papers
Explainable Artificial Intelligence for Economic Time Series: A Comprehensive Review and a Systematic Taxonomy of Methods and Concepts
General Economics
Helps readers understand why a model's economic forecasts come out the way they do.
An Empirical Evaluation of Factors Affecting SHAP Explanation of Time Series Classification
Artificial Intelligence
Examines what affects how well SHAP explains time-series classifiers.
A Self-explainable Model of Long Time Series by Extracting Informative Structured Causal Patterns
Machine Learning (CS)
Shows how past events affect future predictions.