Counterfactual Explanation for Multivariate Time Series Forecasting with Exogenous Variables
By: Keita Kinjo
Potential Business Impact:
Shows what would need to change for a computer's forecast to come out differently.
Machine learning is now widely used across many domains, including time series analysis. However, some machine learning models function as black boxes, making interpretability a critical concern. One approach to addressing this issue is counterfactual explanation (CE), which explains a prediction by showing how the inputs would have to change for the model to produce a different outcome. This study focuses on the relatively underexplored problem of generating counterfactual explanations for time series forecasting. We propose a method for extracting CEs in time series forecasting with exogenous variables, which are frequently encountered in fields such as business and marketing. In addition, we present methods for analyzing the influence of each variable over an entire time series, for generating CEs that alter only specific variables, and for evaluating the quality of the resulting CEs. We validate the proposed method through theoretical analysis and empirical experiments, demonstrating its accuracy and practical applicability. These contributions are expected to support real-world decision-making based on time series data analysis.
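
To illustrate the general idea, the sketch below searches for a small perturbation of the exogenous inputs that moves a forecast toward a desired target, while optionally freezing variables that may not be altered. This is a hedged illustration, not the paper's actual algorithm: the linear toy forecaster, the quadratic objective, the gradient-descent search, the `frozen` mask, and all parameter values are assumptions made only for this example.

import numpy as np

def forecast(exog, params):
    # Toy linear forecaster: one prediction per time step from k exogenous variables.
    return exog @ params

def counterfactual_exog(exog, params, target, lam=0.1, lr=0.01, steps=2000, frozen=None):
    # Gradient search for a perturbed exogenous series whose forecast approaches
    # `target` while staying close to the original series; `frozen` lists variable
    # indices that are not allowed to change.
    exog_cf = exog.astype(float).copy()
    mask = np.ones(exog.shape[1])
    if frozen is not None:
        mask[list(frozen)] = 0.0  # keep these exogenous variables fixed
    for _ in range(steps):
        residual = forecast(exog_cf, params) - target          # forecast error vs. target
        grad = 2 * np.outer(residual, params) + 2 * lam * (exog_cf - exog)
        exog_cf -= lr * grad * mask                            # update only changeable variables
    return exog_cf

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, k = 12, 3                              # 12 time steps, 3 exogenous variables
    exog = rng.normal(size=(T, k))
    params = np.array([1.5, -0.8, 0.4])       # weights of the toy forecaster
    target = forecast(exog, params) + 1.0     # ask for a uniformly higher forecast
    exog_cf = counterfactual_exog(exog, params, target, frozen=[2])
    print("mean forecast before:", forecast(exog, params).mean())
    print("mean forecast after :", forecast(exog_cf, params).mean())
    print("total change to exogenous inputs:", np.abs(exog_cf - exog).sum())

The squared proximity term plays the role of the closeness penalty typical of CE objectives; in an actual application, the toy forecaster would be replaced by the trained model and the distance measure and search procedure would follow the paper's definitions.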
Similar Papers
Counterfactual Explainable AI (XAI) Method for Deep Learning-Based Multivariate Time Series Classification
Machine Learning (CS)
Shows why computers make certain time-based guesses.
GenFacts: Generative Counterfactual Explanations for Multi-Variate Time Series
Machine Learning (CS)
Shows how to change data to get different results.