ShapeX: Shapelet-Driven Post Hoc Explanations for Time Series Classification Models

Published: October 23, 2025 | arXiv ID: 2510.20084v1

By: Bosong Huang, Ming Jin, Yuxuan Liang, and more

Potential Business Impact:

Explains which parts of a time series drive a classifier's predictions, supporting transparency and trust in high-stakes uses such as healthcare and finance.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Explaining time series classification models is crucial, particularly in high-stakes applications such as healthcare and finance, where transparency and trust are essential. Although many time series classification methods identify key subsequences, known as shapelets, as the core features behind state-of-the-art performance, confirming their pivotal role in classification outcomes, existing post-hoc time series explanation (PHTSE) methods focus primarily on timestep-level feature attribution. These methods overlook the fundamental prior that classification outcomes are driven predominantly by key shapelets. To bridge this gap, we present ShapeX, a framework that segments time series into meaningful shapelet-driven segments and employs Shapley values to assess their saliency. At the core of ShapeX lies the Shapelet Describe-and-Detect (SDD) framework, which learns a diverse set of shapelets essential for classification. We further demonstrate that, owing to the atomicity of shapelets, ShapeX produces explanations that reveal causal relationships rather than mere correlations. Experiments on both synthetic and real-world datasets show that ShapeX outperforms existing methods in identifying the most relevant subsequences, improving both the precision and the causal fidelity of time series explanations.
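
To make the segment-level attribution idea concrete, below is a minimal sketch of Shapley value estimation over pre-defined time series segments. It is not the authors' ShapeX implementation: the segment boundaries (which ShapeX derives from learned shapelets via SDD), the `model_predict` callable, the mean-value baseline used to mask a segment, and the Monte Carlo permutation estimator are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of segment-level Shapley attribution for a time series
# classifier. Segment boundaries, the mean-value baseline for "removing"
# a segment, and the permutation-sampling estimator are assumptions made
# for illustration, not the paper's method.

def segment_shapley(x, segments, model_predict, target_class,
                    n_perm=200, seed=0):
    """x: 1-D array of length T; segments: list of (start, end) index pairs;
    model_predict: callable mapping a (1, T) array to (1, n_classes) probs."""
    rng = np.random.default_rng(seed)
    baseline = x.mean()            # hypothetical choice for masking a segment
    k = len(segments)
    phi = np.zeros(k)

    def predict_with(present):
        xm = np.full_like(x, baseline, dtype=float)
        for j in present:
            s, e = segments[j]
            xm[s:e] = x[s:e]       # reveal only the segments in `present`
        return model_predict(xm[None, :])[0, target_class]

    for _ in range(n_perm):
        order = rng.permutation(k)
        present = set()
        prev = predict_with(present)
        for j in order:
            present.add(j)
            cur = predict_with(present)
            phi[j] += cur - prev   # marginal contribution of segment j
            prev = cur
    return phi / n_perm            # estimated Shapley value per segment
```

Each segment's score is its average marginal contribution to the target-class probability across sampled orderings; exact Shapley values would require all 2^k segment coalitions, so permutation sampling keeps the cost at roughly n_perm × k model calls.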

Country of Origin
🇦🇺 Australia

Page Count
32 pages

Category
Computer Science:
Machine Learning (CS)