TriShGAN: Enhancing Sparsity and Robustness in Multivariate Time Series Counterfactual Explanations
By: Hongnan Ma, Yiwei Shi, Guanxiong Sun, and more
Potential Business Impact:
Makes AI decisions more understandable and reliable.
In decision-making processes, stakeholders often rely on counterfactual explanations, which suggest what should be changed in a queried instance to alter the outcome of an AI system. However, generating these explanations for multivariate time series is challenging due to their complex, multi-dimensional nature. Traditional Nearest Unlike Neighbor (NUN)-based methods typically substitute subsequences in the queried time series with influential subsequences from an NUN, which is not always realistic in real-world scenarios because of the rigidity of direct substitution. Counterfactual Residual Generative Adversarial Network (CounteRGAN)-based methods aim to address this by learning the distribution of observed data to generate synthetic counterfactual explanations. However, these methods primarily focus on minimizing the cost from the queried time series to the counterfactual explanation and often neglect the importance of distancing the counterfactual explanation from the decision boundary. This oversight can produce explanations that no longer qualify as counterfactuals if the model changes slightly. To generate more robust counterfactual explanations, we introduce TriShGAN, which builds on the CounteRGAN framework and enhances it with a triplet loss. This unsupervised approach uses distance metric learning to encourage the counterfactual explanations not only to remain close to the queried time series but also to capture the feature distribution of instances with the desired outcome, thereby achieving a better balance between minimal cost and robustness. Additionally, we integrate a Shapelet Extractor that strategically selects the most discriminative parts of the high-dimensional queried time series to enhance the sparsity of the counterfactual explanations and the efficiency of the training process.
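The abstract contrasts TriShGAN with NUN-based methods, which copy an influential subsequence from the nearest unlike neighbor directly into the query. The sketch below illustrates that baseline substitution idea only; the function name `nun_substitution` and the fixed `(start, length)` window are illustrative assumptions, since real NUN approaches also choose the window via an importance or shapelet criterion.

```python
import numpy as np

def nun_substitution(query, nun, start, length):
    """Toy NUN-style counterfactual: copy a subsequence from the nearest
    unlike neighbor (NUN) into the queried multivariate series.

    query, nun : arrays of shape (T, D) -- T time steps, D channels
    start, length : location of the subsequence to replace (assumed given)
    """
    cf = query.copy()
    cf[start:start + length, :] = nun[start:start + length, :]
    return cf

# toy usage: 100 time steps, 3 channels
rng = np.random.default_rng(0)
query = rng.normal(size=(100, 3))
nun = rng.normal(loc=1.0, size=(100, 3))
counterfactual = nun_substitution(query, nun, start=40, length=20)
```

This rigid copy-paste is exactly what the abstract flags as unrealistic, motivating the generative approach.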
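The triplet loss is described only at a high level, so the following is a hedged sketch of how a triplet-style objective plus a proximity term could balance closeness to the query against similarity to desired-outcome instances. The anchor/positive/negative assignment, the weight `lam`, and the function name are assumptions for illustration, not TriShGAN's exact formulation.

```python
import torch
import torch.nn.functional as F

def counterfactual_triplet_loss(z_cf, z_query, z_desired, margin=1.0, lam=0.1):
    """Illustrative objective in embedding space.

    z_cf      : embedding of the generated counterfactual (used as anchor)
    z_query   : embedding of the queried time series
    z_desired : embedding of an instance with the desired outcome (e.g. the NUN)

    The triplet term pulls the counterfactual toward the desired-class
    instance and away from the decision boundary, while the L1 proximity
    term keeps it close to the query (minimal cost).
    """
    triplet = F.triplet_margin_loss(z_cf, z_desired, z_query, margin=margin)
    proximity = F.l1_loss(z_cf, z_query)
    return triplet + lam * proximity

# toy usage with random 32-dim embeddings for a batch of 8 instances
z_cf = torch.randn(8, 32, requires_grad=True)
z_query = torch.randn(8, 32)
z_desired = torch.randn(8, 32)
loss = counterfactual_triplet_loss(z_cf, z_query, z_desired)
loss.backward()
```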
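The Shapelet Extractor is said to select the most discriminative parts of the queried series. Below is a simplified, hypothetical shapelet scorer (univariate, using a mean-distance separability gap rather than information gain or a learned selector) to illustrate the general idea of discriminative subsequence selection; `best_shapelet` and its scoring rule are assumptions, not the paper's component.

```python
import numpy as np

def sliding_min_dist(series, shapelet):
    """Minimum Euclidean distance between a candidate shapelet and all
    windows of a univariate series."""
    L = len(shapelet)
    return min(np.linalg.norm(series[i:i + L] - shapelet)
               for i in range(len(series) - L + 1))

def best_shapelet(series_pos, series_neg, candidates):
    """Score each candidate by how well its min-distances separate the two
    classes (difference of class means) and return the best one."""
    best, best_score = None, -np.inf
    for shp in candidates:
        d_pos = [sliding_min_dist(s, shp) for s in series_pos]
        d_neg = [sliding_min_dist(s, shp) for s in series_neg]
        score = abs(np.mean(d_pos) - np.mean(d_neg))
        if score > best_score:
            best, best_score = shp, score
    return best

# toy usage: candidates drawn from one positive-class series
rng = np.random.default_rng(1)
pos = [rng.normal(size=60) + np.sin(np.linspace(0, 6, 60)) for _ in range(5)]
neg = [rng.normal(size=60) for _ in range(5)]
candidates = [pos[0][i:i + 15] for i in range(0, 45, 5)]
shapelet = best_shapelet(pos, neg, candidates)
```

Restricting edits to such discriminative subsequences is what keeps the resulting counterfactuals sparse.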
Similar Papers
GenFacts: Generative Counterfactual Explanations for Multi-Variate Time Series
Machine Learning (CS)
Shows how to change data to get different results.
From Prototypes to Sparse ECG Explanations: SHAP-Driven Counterfactuals for Multivariate Time-Series Multi-class Classification
Machine Learning (CS)
Explains heart monitor results by showing what to change.
Counterfactual Explainable AI (XAI) Method for Deep Learning-Based Multivariate Time Series Classification
Machine Learning (CS)
Shows why computers make certain time-based guesses.