Time to Embed: Unlocking Foundation Models for Time Series with Channel Descriptions
By: Utsav Dutta, Sina Khoshfetrat Pakazad, Henrik Ohlsson
Potential Business Impact:
Teaches computers to understand many kinds of time-series data.
Traditional time series models are task-specific and often depend on dataset-specific training and extensive feature engineering. While Transformer-based architectures have improved scalability, foundation models, commonplace in text, vision, and audio, remain under-explored for time series and are largely restricted to forecasting. We introduce CHARM, a foundation embedding model for multivariate time series that learns shared, transferable, and domain-aware representations. To address the unique difficulties of time series foundation learning, CHARM incorporates architectural innovations that integrate channel-level textual descriptions while remaining invariant to channel order. The model is trained using a Joint Embedding Predictive Architecture (JEPA), with novel augmentation schemes and a loss function designed to improve interpretability and training stability. Our 7M-parameter model achieves state-of-the-art performance across diverse downstream tasks, setting a new benchmark for time series representation learning.
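The abstract combines three ideas: conditioning each channel on a textual description, keeping the encoder invariant to channel order, and training with a JEPA-style objective in which a predictor maps the embedding of a context view to the embedding of a held-out target view. The sketch below illustrates how those pieces could fit together; it is not the authors' implementation, and every module name, dimension, patching choice, and the 384-dimensional description embedding are illustrative assumptions.

```python
# Conceptual sketch (not CHARM's actual code): a channel-order-invariant encoder
# that fuses per-channel time-series patches with precomputed channel-description
# embeddings, plus a JEPA-style loss that predicts a stop-gradient target encoder's
# output from the context encoder's output.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAwareEncoder(nn.Module):
    """Embed each channel's patches, add its description embedding, then pool
    over channels without positional encodings so the output does not depend
    on channel order."""

    def __init__(self, patch_len=16, d_model=64, n_heads=4, n_layers=2, desc_dim=384):
        super().__init__()
        self.patch_len = patch_len
        self.patch_proj = nn.Linear(patch_len, d_model)   # turn each patch into a token
        self.desc_proj = nn.Linear(desc_dim, d_model)     # project a sentence-embedding of the channel description
        self.temporal = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True),
            n_layers)                                      # attention over time within a channel
        self.channel_mix = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True),
            1)                                             # attention across channels (no positions -> order invariant)

    def forward(self, x, desc_emb):
        # x: (batch, channels, time); desc_emb: (batch, channels, desc_dim)
        b, c, _ = x.shape
        patches = x.unfold(-1, self.patch_len, self.patch_len)   # (b, c, n_patches, patch_len)
        tokens = self.patch_proj(patches)                         # (b, c, n_patches, d_model)
        tokens = tokens + self.desc_proj(desc_emb).unsqueeze(2)   # inject channel semantics into every patch token
        tokens = self.temporal(tokens.flatten(0, 1))              # (b*c, n_patches, d_model)
        per_channel = tokens.mean(dim=1).view(b, c, -1)           # one vector per channel
        mixed = self.channel_mix(per_channel)                     # cross-channel attention, permutation equivariant
        return mixed.mean(dim=1)                                  # mean pool over channels -> permutation invariant


def jepa_loss(context_x, target_x, desc_emb, student, teacher, predictor):
    """Predict the teacher's embedding of the unseen target view from the
    student's embedding of the context view (embeddings, not raw values)."""
    with torch.no_grad():
        target = teacher(target_x, desc_emb)            # stop-gradient target branch
    pred = predictor(student(context_x, desc_emb))
    return F.mse_loss(pred, target)


if __name__ == "__main__":
    student = ChannelAwareEncoder()
    teacher = ChannelAwareEncoder()
    teacher.load_state_dict(student.state_dict())        # in practice the teacher is usually an EMA copy of the student
    predictor = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))

    x = torch.randn(8, 5, 128)                           # 8 series, 5 channels, 128 timesteps (synthetic)
    desc = torch.randn(8, 5, 384)                        # stand-in for channel-description text embeddings
    context, target = x[..., :96], x[..., 96:]           # simple context/target split as one possible augmentation
    loss = jepa_loss(context, target, desc, student, teacher, predictor)
    print(loss.item())
```

Because the cross-channel block uses no positional encodings and the final step mean-pools over channels, permuting the channels (together with their descriptions) leaves the embedding unchanged, which is one simple way to realize the channel-order invariance the abstract describes.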
Similar Papers
Foundation Models for Time Series: A Survey
Machine Learning (CS)
Helps computers understand patterns in data over time.
TimesBERT: A BERT-Style Foundation Model for Time Series Understanding
Machine Learning (CS)
Helps computers understand patterns in data streams.
Conversational Time Series Foundation Models: Towards Explainable and Effective Forecasting
Artificial Intelligence
AI learns to pick the best prediction tool.