Guided by Stars: Interpretable Concept Learning Over Time Series via Temporal Logic Semantics
By: Irene Ferfoglia, Simone Silvetti, Gaia Saveri and more
Potential Business Impact:
Explains why machine learning models make the decisions they do about time-series data.
Time series classification is a task of paramount importance, as this kind of data often arises in safety-critical applications. However, it is typically tackled with black-box deep learning methods, making it hard for humans to understand the rationale behind their outputs. To take on this challenge, we propose STELLE (Signal Temporal logic Embedding for Logically-grounded Learning and Explanation), a novel neuro-symbolic framework that unifies classification and explanation by directly embedding trajectories into a space of temporal logic concepts. By introducing a novel STL-inspired kernel that maps raw time series to their degree of alignment with predefined STL formulae, our model jointly optimises accuracy and interpretability: each prediction is accompanied by the most relevant logical concepts that characterise it. This yields (i) local explanations as human-readable STL conditions justifying individual predictions, and (ii) global explanations as class-characterising formulae. Experiments on diverse real-world benchmarks demonstrate that STELLE achieves competitive accuracy while providing logically faithful explanations.
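To make the embedding idea concrete, here is a minimal sketch of mapping a raw trajectory to robustness scores against a small bank of STL formulae. The formula bank, helper names, and quantitative semantics below are illustrative assumptions, not the actual STELLE kernel:

```python
import numpy as np

# Sketch of an STL-based concept embedding: each coordinate of the
# embedding is the robustness of the trajectory against one predefined
# STL formula. Formula bank and helpers are hypothetical stand-ins.

def rob_gt(x, c):
    """Atomic predicate x > c: pointwise robustness is x_t - c."""
    return x - c

def rob_always(r):
    """Globally (G): a trace satisfies G(phi) as strongly as its worst point."""
    return np.min(r)

def rob_eventually(r):
    """Eventually (F): a trace satisfies F(phi) as strongly as its best point."""
    return np.max(r)

# Hypothetical concept bank: each concept scores a trajectory.
concept_bank = [
    ("G(x > 0.5)",  lambda x: rob_always(rob_gt(x, 0.5))),
    ("F(x > 1.0)",  lambda x: rob_eventually(rob_gt(x, 1.0))),
    ("G(x > -0.2)", lambda x: rob_always(rob_gt(x, -0.2))),
]

def stl_embedding(x):
    """Embed a raw time series as its vector of robustness scores."""
    return np.array([concept(x) for _, concept in concept_bank])

trajectory = np.sin(np.linspace(0, 2 * np.pi, 100))
for (name, _), r in zip(concept_bank, stl_embedding(trajectory)):
    print(f"{name}: robustness = {r:+.3f}")  # sign encodes satisfaction
```

A positive score means the trajectory satisfies the formula (the larger, the more robustly); a negative score means violation. A vector of such scores plays the role of the interpretable concept embedding described above.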
Similar Papers
Towards Interpretable Concept Learning over Time Series via Temporal Logic Semantics
Machine Learning (CS)
Explains why computers make decisions about time data.
STELLA: Guiding Large Language Models for Time Series Forecasting with Semantic Abstractions
Artificial Intelligence
Helps computers predict future events more accurately.