MTS-DMAE: Dual-Masked Autoencoder for Unsupervised Multivariate Time Series Representation Learning
By: Yi Xu, Yitian Zhang, Yun Fu
Potential Business Impact:
Teaches computers to understand time-series data without labeled examples.
Unsupervised multivariate time series (MTS) representation learning aims to extract compact and informative representations from raw sequences without relying on labels, enabling efficient transfer to diverse downstream tasks. In this paper, we propose Dual-Masked Autoencoder (DMAE), a novel masked time-series modeling framework for unsupervised MTS representation learning. DMAE formulates two complementary pretext tasks: (1) reconstructing masked values based on visible attributes, and (2) estimating latent representations of masked features, guided by a teacher encoder. To further improve representation quality, we introduce a feature-level alignment constraint that encourages the predicted latent representations to align with the teacher's outputs. By jointly optimizing these objectives, DMAE learns temporally coherent and semantically rich representations. Comprehensive evaluations across classification, regression, and forecasting tasks demonstrate that our approach consistently outperforms competitive baselines.
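To make the two pretext objectives concrete, here is a minimal PyTorch sketch of a training step with a value-reconstruction loss and a teacher-aligned latent-prediction loss on masked positions. Everything in it (the toy encoder, masking scheme, head definitions, frozen-copy teacher, and equal loss weighting) is an illustrative assumption based on the abstract, not the authors' implementation.

```python
# Sketch of DMAE's two complementary objectives, per the abstract:
# (1) reconstruct masked values from visible context, and
# (2) predict latent representations of masked features, aligned
#     with a teacher encoder's outputs. Details are assumptions.
import copy
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy per-timestep encoder; the paper presumably uses a stronger backbone."""
    def __init__(self, n_vars: int, d_model: int = 64):
        super().__init__()
        self.proj = nn.Linear(n_vars, d_model)
        self.mix = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )

    def forward(self, x):              # x: (batch, time, n_vars)
        return self.mix(self.proj(x))  # (batch, time, d_model)

def dmae_losses(x, mask, student, teacher, value_head, latent_head):
    """Compute both objectives on masked positions.

    mask: (batch, time) bool, True where values are hidden from the student.
    """
    x_visible = x.masked_fill(mask.unsqueeze(-1), 0.0)  # hide masked steps
    z = student(x_visible)

    # (1) Reconstruct masked values from visible attributes.
    x_hat = value_head(z)
    recon = ((x_hat - x) ** 2)[mask].mean()

    # (2) Estimate latent representations of masked features, with a
    #     feature-level alignment constraint against the teacher, which
    #     here sees the unmasked input (an assumption).
    with torch.no_grad():
        z_teacher = teacher(x)
    z_hat = latent_head(z)
    align = ((z_hat - z_teacher) ** 2)[mask].mean()

    return recon, align

# --- illustrative usage ------------------------------------------------
n_vars, d_model = 8, 64
student = Encoder(n_vars, d_model)
teacher = copy.deepcopy(student).requires_grad_(False)  # frozen copy; could be EMA
value_head = nn.Linear(d_model, n_vars)
latent_head = nn.Linear(d_model, d_model)

x = torch.randn(4, 100, n_vars)      # (batch, time, variables)
mask = torch.rand(4, 100) < 0.5      # random masking ratio (assumed)
recon, align = dmae_losses(x, mask, student, teacher, value_head, latent_head)
loss = recon + align                 # joint optimization of both objectives
loss.backward()
```

The equal weighting of the two losses and the frozen-copy teacher are placeholders; in practice the teacher in such frameworks is often an exponential moving average of the student, and the loss balance is a tuned hyperparameter.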
Similar Papers
LV-MAE: Learning Long Video Representations through Masked-Embedding Autoencoders
Computer Vision and Pattern Recognition
Helps computers understand long videos better.
MoCA: Multi-modal Cross-masked Autoencoder for Digital Health Measurements
Machine Learning (Stat)
Helps smartwatches learn from your body.
Mask the Redundancy: Evolving Masking Representation Learning for Multivariate Time-Series Clustering
Machine Learning (CS)
Finds important moments in data for better grouping.