Multimodal Conditioned Diffusive Time Series Forecasting
By: Chen Su, Yuanhe Tian, Yan Song
Potential Business Impact:
Predicts future time series values using timestamp and text clues.
Diffusion models achieve remarkable success in processing images and text, and have been extended to specialized domains such as time series forecasting (TSF). Existing diffusion-based approaches for TSF primarily focus on modeling single-modality numerical sequences, overlooking the rich multimodal information in time series data. To effectively leverage such information for prediction, we propose a multimodal conditioned diffusion model for TSF, namely MCD-TSF, which jointly utilizes timestamps and texts as extra guidance for time series modeling, especially for forecasting. Specifically, timestamps are combined with the time series to establish temporal and semantic correlations among different data points when aggregating information along the temporal dimension. Texts serve as supplementary descriptions of the time series' history, and are adaptively aligned with data points as well as dynamically controlled in a classifier-free manner. Extensive experiments on real-world benchmark datasets across eight domains demonstrate that the proposed MCD-TSF model achieves state-of-the-art performance.
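To make the conditioning idea concrete, below is a minimal sketch (not the authors' code) of how a diffusion denoising step might take both timestamp features and a text embedding as conditions, with the text condition dropped to a null embedding for classifier-free guidance. All module and parameter names (SeriesDenoiser, guidance_scale, the feature dimensions) are hypothetical illustrations, not the MCD-TSF implementation.

```python
import torch
import torch.nn as nn

class SeriesDenoiser(nn.Module):
    """Toy denoiser: predicts noise from a noisy series plus timestamp/text conditions."""
    def __init__(self, series_dim=1, hidden=64, text_dim=32, time_feat_dim=8):
        super().__init__()
        self.series_proj = nn.Linear(series_dim, hidden)
        self.time_proj = nn.Linear(time_feat_dim, hidden)  # timestamp-derived features (e.g., hour, weekday)
        self.text_proj = nn.Linear(text_dim, hidden)        # text embedding describing the series' history
        self.net = nn.Sequential(nn.Linear(hidden, hidden), nn.SiLU(), nn.Linear(hidden, series_dim))

    def forward(self, x_t, time_feats, text_emb):
        # Fuse the three streams; text_emb can be zeroed out to drop the text condition.
        h = self.series_proj(x_t) + self.time_proj(time_feats) + self.text_proj(text_emb)
        return self.net(h)

@torch.no_grad()
def guided_noise_estimate(model, x_t, time_feats, text_emb, guidance_scale=2.0):
    """Classifier-free guidance: blend text-conditioned and unconditional noise predictions."""
    eps_cond = model(x_t, time_feats, text_emb)
    eps_uncond = model(x_t, time_feats, torch.zeros_like(text_emb))  # null text condition
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Usage: batch of 4 windows, 24 future steps, univariate series.
model = SeriesDenoiser()
x_t = torch.randn(4, 24, 1)          # noisy future series at some diffusion step
time_feats = torch.randn(4, 24, 8)   # timestamp features aligned to each step
text_emb = torch.randn(4, 24, 32)    # text embedding aligned to each step
eps = guided_noise_estimate(model, x_t, time_feats, text_emb)
print(eps.shape)  # torch.Size([4, 24, 1])
```

The guidance_scale term controls how strongly the text condition steers the forecast, mirroring the "dynamically controlled in a classifier-free manner" mechanism described in the abstract.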
Similar Papers
UniDiff: A Unified Diffusion Framework for Multimodal Time Series Forecasting
Machine Learning (CS)
Predicts future events using text and time.
Text Reinforcement for Multimodal Time Series Forecasting
Computation and Language
Makes predictions better by improving text.
Dual-Forecaster: A Multimodal Time Series Model Integrating Descriptive and Predictive Texts
Machine Learning (CS)
Helps predict the future using past words and numbers.