Score: 3

Time to Embed: Unlocking Foundation Models for Time Series with Channel Descriptions

Published: May 20, 2025 | arXiv ID: 2505.14543v1

By: Utsav Dutta, Sina Khoshfetrat Pakazad, Henrik Ohlsson

BigTech Affiliations: C3.ai

Potential Business Impact:

Teaches computers to understand many kinds of time-series data.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Traditional time series models are task-specific and often depend on dataset-specific training and extensive feature engineering. While Transformer-based architectures have improved scalability, foundation models, commonplace in text, vision, and audio, remain under-explored for time series and are largely restricted to forecasting. We introduce CHARM, a foundation embedding model for multivariate time series that learns shared, transferable, and domain-aware representations. To address the unique difficulties of time series foundation learning, CHARM incorporates architectural innovations that integrate channel-level textual descriptions while remaining invariant to channel order. The model is trained using a Joint Embedding Predictive Architecture (JEPA), with novel augmentation schemes and a loss function designed to improve interpretability and training stability. Our 7M-parameter model achieves state-of-the-art performance across diverse downstream tasks, setting a new benchmark for time series representation learning.
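The abstract does not spell out the architecture, but the two properties it names, conditioning on channel-level text descriptions and invariance to channel order, can be illustrated with a minimal sketch. Everything below is an assumption made for illustration rather than the paper's design: the patching scheme, the use of precomputed sentence embeddings for the channel descriptions, and the Transformer-plus-mean-pooling layout are not taken from CHARM.

```python
# Minimal sketch (not the authors' code): per-channel patch tokens are tagged with an
# embedding of that channel's textual description, then processed as an unordered set,
# so permuting the channels does not change the output. All sizes are illustrative.
import torch
import torch.nn as nn


class ChannelAwareEncoder(nn.Module):
    def __init__(self, patch_len=16, max_patches=64, d_model=64, text_dim=384,
                 n_heads=4, n_layers=2):
        super().__init__()
        self.patch_len = patch_len
        self.value_proj = nn.Linear(patch_len, d_model)   # embed each temporal patch
        self.text_proj = nn.Linear(text_dim, d_model)     # embed the channel description
        self.pos = nn.Embedding(max_patches, d_model)     # temporal position, shared across channels
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x, desc_emb):
        # x: (batch, channels, time); desc_emb: (batch, channels, text_dim)
        patches = x.unfold(-1, self.patch_len, self.patch_len)  # (b, c, n_patches, patch_len)
        n_patches = patches.shape[2]
        tokens = self.value_proj(patches)                        # (b, c, n_patches, d_model)
        tokens = tokens + self.pos(torch.arange(n_patches, device=x.device))  # keep temporal order
        tokens = tokens + self.text_proj(desc_emb).unsqueeze(2)  # tag tokens with channel semantics
        tokens = tokens.flatten(1, 2)            # channels become an unordered token set
        h = self.encoder(tokens)                 # self-attention is permutation-equivariant
        return h.mean(dim=1)                     # pooling makes the output channel-order invariant


# Toy usage: 2 series with 3 described channels and 128 time steps each.
enc = ChannelAwareEncoder()
x = torch.randn(2, 3, 128)
desc = torch.randn(2, 3, 384)   # e.g. sentence embeddings of "motor temperature (deg C)", etc.
print(enc(x, desc).shape)       # torch.Size([2, 64])
```

Because no channel index is ever encoded, only the shared temporal positions and the description embeddings distinguish tokens, so reordering channels permutes the token set without changing the pooled representation; the JEPA training objective mentioned in the abstract is not shown here.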

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
35 pages

Category
Computer Science:
Machine Learning (CS)