
Towards Interpretable Concept Learning over Time Series via Temporal Logic Semantics

Published: August 5, 2025 | arXiv ID: 2508.03269v2

By: Irene Ferfoglia, Simone Silvetti, Gaia Saveri, and more

Potential Business Impact:

Explains why a model makes the decisions it does about time-series data.

Time series classification is a task of paramount importance, as this kind of data often arises in safety-critical applications. However, it is typically tackled with black-box deep learning methods, making it hard for humans to understand the rationale behind their output. To take on this challenge, we propose a neuro-symbolic framework that unifies classification and explanation through direct embedding of trajectories into a space of Signal Temporal Logic (STL) concepts. By introducing a novel STL-inspired kernel that maps raw time series to their alignment with predefined STL formulae, our model jointly optimises for accuracy and interpretability, as each prediction is accompanied by the most relevant logical concepts that characterise it. This enables classification grounded in human-interpretable temporal patterns and produces both local and global symbolic explanations. Early results show competitive performance while offering high-quality logical justifications for model decisions.

Country of Origin
🇮🇹 Italy

Page Count
3 pages

Category
Computer Science: Machine Learning (CS)