Repetitive Contrastive Learning Enhances Mamba's Selectivity in Time Series Prediction

Published: April 12, 2025 | arXiv ID: 2504.09185v1

By: Wenbo Yan, Hanzhong Cao, Ying Tan

Potential Business Impact:

Improves time series forecasts by teaching models to focus on the most informative moments and ignore noise.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Long-sequence prediction is a key challenge in time series forecasting. While Mamba-based models have shown strong performance thanks to their sequence-selection capabilities, limited selectivity still leaves them with insufficient focus on critical time steps and incomplete noise suppression. To address this, we introduce Repetitive Contrastive Learning (RCL), a token-level contrastive pretraining framework that enhances Mamba's selectivity. RCL pretrains a single Mamba block to strengthen its selective abilities and then transfers the pretrained parameters to initialize the Mamba blocks of various backbone models, improving their temporal prediction performance. RCL augments sequences with Gaussian noise and applies inter-sequence and intra-sequence contrastive learning so that the Mamba module learns to prioritize information-rich time steps while ignoring noisy ones. Extensive experiments show that RCL consistently boosts the performance of backbone models, surpassing existing methods and achieving state-of-the-art results. Additionally, we propose two metrics to quantify Mamba's selectivity, providing theoretical, qualitative, and quantitative evidence for the improvements brought by RCL.
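To make the pretraining recipe concrete, below is a minimal sketch of token-level contrastive pretraining in the spirit of RCL. The paper's exact losses, noise schedule, and Mamba implementation are not given in the abstract, so everything here is an assumption: `MambaBlock` is a stand-in encoder (a real setup would use an actual Mamba block, e.g. from the mamba-ssm package), and the InfoNCE-style pairing (the same time step across the clean and noisy views as positives, other time steps as negatives) is one plausible reading of "inter-sequence and intra-sequence contrastive learning".

```python
# Illustrative sketch only; not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MambaBlock(nn.Module):
    """Placeholder for a real Mamba block (e.g., from the mamba-ssm package)."""
    def __init__(self, d_model: int):
        super().__init__()
        # Stand-in sequence model; a real RCL setup would use a Mamba SSM layer.
        self.seq = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, x):                     # x: (batch, seq_len, d_model)
        out, _ = self.seq(x)
        return out                            # token-level hidden states

def token_contrastive_loss(h_clean, h_noisy, tau: float = 0.1):
    """InfoNCE over time steps: step t of the clean view should match step t
    of the noisy view; all other steps in the noisy view act as negatives."""
    b, t, _ = h_clean.shape
    z1 = F.normalize(h_clean, dim=-1)
    z2 = F.normalize(h_noisy, dim=-1)
    logits = torch.bmm(z1, z2.transpose(1, 2)) / tau      # (b, t, t) similarities
    labels = torch.arange(t, device=h_clean.device).expand(b, t)
    return F.cross_entropy(logits.reshape(b * t, t), labels.reshape(b * t))

# One pretraining step: augment with Gaussian noise, encode both views, contrast.
block = MambaBlock(d_model=64)
opt = torch.optim.Adam(block.parameters(), lr=1e-3)

x = torch.randn(32, 96, 64)                   # toy batch of 96-step sequences
x_noisy = x + 0.1 * torch.randn_like(x)       # Gaussian-noise augmentation

loss = token_contrastive_loss(block(x), block(x_noisy))
loss.backward()
opt.step()

# After pretraining, the block's weights would be used to initialize the Mamba
# blocks of a forecasting backbone, per the transfer step described above.
```

The noise level (0.1 here) and temperature `tau` are illustrative hyperparameters; the key idea the sketch captures is that a noisy copy of each sequence supplies positives per time step, pushing the encoder to keep informative steps distinguishable from their noisy surroundings.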

Page Count
24 pages

Category
Computer Science:
Machine Learning (CS)