Counterfactual Explainable AI (XAI) Method for Deep Learning-Based Multivariate Time Series Classification
By: Alan G. Paredes Cetina, Kaouther Benguessoum, Raoni Lourenço, et al.
Potential Business Impact:
Shows why AI models make certain predictions from time-based data.
Recent advances in deep learning have improved multivariate time series (MTS) classification and regression by capturing complex patterns, but their lack of transparency hinders decision-making. Explainable AI (XAI) methods offer partial insights, yet often fall short of conveying the full decision space. Counterfactual Explanations (CE) provide a promising alternative, but current approaches typically prioritize accuracy, proximity, or sparsity -- rarely all three -- limiting their practical value. To address this, we propose CONFETTI, a novel multi-objective CE method for MTS. CONFETTI identifies key MTS subsequences, locates a counterfactual target, and optimally modifies the time series to balance prediction confidence, proximity, and sparsity. This method provides actionable insights with minimal changes, improving interpretability and decision support. CONFETTI is evaluated on seven MTS datasets from the UEA archive, demonstrating its effectiveness in various domains. CONFETTI consistently outperforms state-of-the-art CE methods in its optimization objectives, and in six other metrics from the literature, achieving $\geq10\%$ higher confidence while improving sparsity by $\geq40\%$.
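To make the idea concrete, below is a minimal sketch of a subsequence-substitution counterfactual search for an MTS classifier. This is not the authors' CONFETTI algorithm; it is an illustrative stand-in that follows the three steps the abstract describes (find a counterfactual target, identify which subsequences to change, and trade off confidence, proximity, and sparsity). The function name `counterfactual_sketch`, the sklearn-style `model.predict_proba` interface, and parameters such as `win` and `min_conf` are assumptions for illustration only.

```python
import numpy as np

def counterfactual_sketch(model, x, target_class, candidates,
                          win=10, min_conf=0.6, max_iters=50):
    """Greedy subsequence-substitution counterfactual for an MTS query.

    model        -- any classifier exposing predict_proba on a batch shaped
                    (n, T, D); a stand-in for a trained deep model
    x            -- query series to explain, shape (T, D)
    target_class -- class index the counterfactual should reach
    candidates   -- series already classified as target_class, (n, T, D)
    """
    T, D = x.shape
    # Step 1: locate a counterfactual "target" -- here, simply the
    # target-class candidate closest to x in Frobenius (L2) distance.
    dists = np.linalg.norm(candidates - x, axis=(1, 2))
    nun = candidates[np.argmin(dists)]  # nearest unlike neighbour

    cf = x.copy()
    changed = np.zeros(T, dtype=bool)
    for _ in range(max_iters):
        proba = model.predict_proba(cf[None])[0]
        # Stop once the target class wins with sufficient confidence.
        if np.argmax(proba) == target_class and proba[target_class] >= min_conf:
            break
        # Step 2: score each window by how much swapping it in from the
        # neighbour raises target-class confidence.
        best_gain, best_start = 0.0, None
        for start in range(0, T - win + 1, win):
            trial = cf.copy()
            trial[start:start + win] = nun[start:start + win]
            gain = (model.predict_proba(trial[None])[0][target_class]
                    - proba[target_class])
            if gain > best_gain:
                best_gain, best_start = gain, start
        if best_start is None:
            break  # no remaining window improves confidence
        # Step 3: apply only the single best window per iteration, keeping
        # edits sparse and the result close to the original query.
        cf[best_start:best_start + win] = nun[best_start:best_start + win]
        changed[best_start:best_start + win] = True

    sparsity = changed.mean()           # fraction of time steps edited
    proximity = np.linalg.norm(cf - x)  # L2 distance to the query
    return cf, sparsity, proximity
```

A genuinely multi-objective method such as the one the paper proposes would optimize confidence, proximity, and sparsity jointly (e.g., via a Pareto front) rather than greedily; the sketch only shows why the three objectives pull against each other: each swapped window buys confidence at the cost of proximity and sparsity.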
Similar Papers
Counterfactual Explanation for Multivariate Time Series Forecasting with Exogenous Variables
Machine Learning (CS)
Explains why computer predictions change.
Actionable and diverse counterfactual explanations incorporating domain knowledge and causal constraints
Artificial Intelligence
Makes AI suggestions practical and believable.