
Guided by Stars: Interpretable Concept Learning Over Time Series via Temporal Logic Semantics

Published: November 6, 2025 | arXiv ID: 2511.04244v1

By: Irene Ferfoglia, Simone Silvetti, Gaia Saveri, et al.

Potential Business Impact:

Explains why machine learning models make the decisions they do on time series data, which matters in safety-critical settings.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Time series classification is a task of paramount importance, as this kind of data often arises in safety-critical applications. However, it is typically tackled with black-box deep learning methods, making it hard for humans to understand the rationale behind their outputs. To take on this challenge, we propose a novel approach, STELLE (Signal Temporal logic Embedding for Logically-grounded Learning and Explanation), a neuro-symbolic framework that unifies classification and explanation by directly embedding trajectories into a space of temporal logic concepts. By introducing a novel STL-inspired kernel that maps raw time series to their alignment with predefined STL formulae, our model jointly optimises accuracy and interpretability, as each prediction is accompanied by the most relevant logical concepts that characterise it. This yields (i) local explanations as human-readable STL conditions justifying individual predictions, and (ii) global explanations as class-characterising formulae. Experiments on diverse real-world benchmarks demonstrate that STELLE achieves competitive accuracy while providing logically faithful explanations.
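To make the core idea concrete, here is a minimal sketch (not the authors' code) of how a time series can be embedded into a space of temporal logic concepts: each coordinate is the quantitative robustness of the series with respect to one fixed STL formula. The formula templates (eventually/globally atomic comparisons) and the thresholds below are illustrative assumptions, not the paper's actual concept set.

```python
# Sketch: embed a raw 1-D time series as a vector of STL robustness scores.
# Robustness semantics for the two atomic templates used here:
#   F(x > c)  ("eventually x exceeds c"):  max_t (x_t - c)
#   G(x < c)  ("globally x stays below c"): min_t (c - x_t)
# A positive score means the formula is satisfied, with magnitude
# measuring how strongly; a downstream classifier can be trained on
# these interpretable features.

def rob_eventually_gt(x, c):
    """Robustness of F(x > c): how strongly the signal ever exceeds c."""
    return max(v - c for v in x)

def rob_globally_lt(x, c):
    """Robustness of G(x < c): how strongly the signal always stays below c."""
    return min(c - v for v in x)

def stl_embedding(x, thresholds=(0.0, 0.5, 1.0)):
    """Map a raw series to its vector of concept-alignment scores."""
    feats = []
    for c in thresholds:  # hypothetical, hand-picked thresholds
        feats.append(rob_eventually_gt(x, c))
        feats.append(rob_globally_lt(x, c))
    return feats

series = [0.1, 0.4, 0.9, 0.3]
print(stl_embedding(series))
```

Each feature doubles as an explanation: a large positive score for, say, G(x < 1.0) reads directly as "the signal always stayed below 1.0", which is the kind of human-readable local justification the abstract describes.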

Country of Origin
🇮🇹 Italy

Repos / Data Links

Page Count
34 pages

Category
Computer Science:
Machine Learning (CS)