Towards Interpretable Concept Learning over Time Series via Temporal Logic Semantics
By: Irene Ferfoglia, Simone Silvetti, Gaia Saveri, and more
Potential Business Impact:
Explains why computers make decisions about time data.
Time series classification is a task of paramount importance, as this kind of data often arises in safety-critical applications. However, it is typically tackled with black-box deep learning methods, making it hard for humans to understand the rationale behind their output. To take on this challenge, we propose a neuro-symbolic framework that unifies classification and explanation through direct embedding of trajectories into a space of Signal Temporal Logic (STL) concepts. By introducing a novel STL-inspired kernel that maps raw time series to their alignment with predefined STL formulae, our model jointly optimises for accuracy and interpretability, as each prediction is accompanied by the most relevant logical concepts that characterise it. This enables classification grounded in human-interpretable temporal patterns and produces both local and global symbolic explanations. Early results show competitive performance while offering high-quality logical justifications for model decisions.
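The core idea of embedding trajectories into a space of STL concepts can be illustrated with a minimal sketch. The paper's actual kernel is not specified here, so the following is a hypothetical toy version: each "concept" is a simple STL formula, and a trajectory is mapped to its quantitative robustness against each formula (positive robustness means the formula is satisfied; the magnitude measures alignment strength). The formula choices and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of an STL-concept embedding (not the paper's kernel).
# Robustness of "eventually x > c" is max_t(x_t - c); robustness of
# "globally x < c" is min_t(c - x_t). A positive score means satisfaction.

def rob_eventually_gt(x, c):
    # F(x > c): satisfied if x exceeds c at some time step
    return float(np.max(x - c))

def rob_globally_lt(x, c):
    # G(x < c): satisfied only if x stays below c at every time step
    return float(np.min(c - x))

def stl_embed(x, formulae):
    # Map a raw trajectory to its robustness against each predefined formula;
    # the resulting vector can then feed a standard classifier.
    return np.array([rob(x) for rob in formulae])

# Two illustrative concepts with arbitrary thresholds
concepts = [
    lambda x: rob_eventually_gt(x, 0.8),  # "the signal eventually spikes above 0.8"
    lambda x: rob_globally_lt(x, 1.0),    # "the signal always stays below 1.0"
]

x = np.array([0.1, 0.5, 0.9, 0.3])
print(stl_embed(x, concepts))
```

A prediction made in this feature space can then be explained by pointing at the concepts with the largest (or most discriminative) robustness values, which is the kind of local symbolic explanation the abstract describes.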
Similar Papers
Guided by Stars: Interpretable Concept Learning Over Time Series via Temporal Logic Semantics
Machine Learning (CS)
Explains why machines make decisions about time data.
SigTime: Learning and Visually Explaining Time Series Signatures
Machine Learning (CS)
Finds hidden patterns in health data.