A Unified Contrastive-Generative Framework for Time Series Classification
By: Ziyu Liu, Azadeh Alavi, Minyi Li, and more
Potential Business Impact:
Teaches computers to understand time patterns better.
Self-supervised learning (SSL) for multivariate time series mainly includes two paradigms: contrastive methods that excel at instance discrimination and generative approaches that model data distributions. While effective individually, their complementary potential remains unexplored. We propose a Contrastive Generative Time series framework (CoGenT), the first framework to unify these paradigms through joint contrastive-generative optimization. CoGenT addresses fundamental limitations of both approaches: it overcomes contrastive learning's sensitivity to high intra-class similarity in temporal data while reducing generative methods' dependence on large datasets. We evaluate CoGenT on six diverse time series datasets. The results show consistent improvements, with up to 59.2% and 14.27% F1 gains over standalone SimCLR and MAE, respectively. Our analysis reveals that the hybrid objective preserves discriminative power while acquiring generative robustness. These findings establish a foundation for hybrid SSL in temporal domains. We will release the code shortly.
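The abstract describes a joint contrastive-generative objective that pairs a SimCLR-style instance-discrimination loss with an MAE-style masked-reconstruction loss. The paper's exact losses and weighting are not given here, so the following is only a minimal NumPy sketch of one plausible combination: an NT-Xent contrastive term plus a masked MSE reconstruction term, blended by a hypothetical weight `lam`. The function names (`nt_xent_loss`, `masked_mse_loss`, `cogent_loss`) and the equal-weight default are assumptions, not the authors' implementation.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss over two augmented views (rows are embeddings)."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity via unit norm
    sim = (z @ z.T) / temperature
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # the positive for sample i is its other view, at index i+n (and vice versa)
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()

def masked_mse_loss(x, x_recon, mask):
    """MAE-style reconstruction loss, computed only on masked time steps."""
    return (((x - x_recon) ** 2) * mask).sum() / mask.sum()

def cogent_loss(z1, z2, x, x_recon, mask, lam=1.0):
    """Hypothetical hybrid objective: contrastive + lam * generative."""
    return nt_xent_loss(z1, z2) + lam * masked_mse_loss(x, x_recon, mask)
```

Under this sketch, `lam` would trade off instance discrimination against reconstruction; the abstract does not state how CoGenT balances the two terms, so any value here is illustrative.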
Similar Papers
Latent Multi-view Learning for Robust Environmental Sound Representations
Sound
Helps computers understand sounds better by learning from noise.
A theoretical framework for self-supervised contrastive learning for continuous dependent data
Machine Learning (CS)
Teaches computers to understand time-based patterns.
Self-Supervised Dynamical System Representations for Physiological Time-Series
Machine Learning (CS)
Helps computers understand body signals better.