Pre-trained Forecasting Models: Strong Zero-Shot Feature Extractors for Time Series Classification
By: Andreas Auer, Daniel Klotz, Sebastian Böck, and more
Potential Business Impact:
Lets existing forecasting models classify time series data without extra training.
Recent research on time series foundation models has primarily focused on forecasting, leaving it unclear how generalizable their learned representations are. In this study, we examine whether frozen pre-trained forecasting models can provide effective representations for classification. To this end, we compare different representation extraction strategies and introduce two model-agnostic embedding augmentations. Our experiments show that the best forecasting models achieve classification accuracy that matches or even surpasses that of state-of-the-art models pre-trained specifically for classification. Moreover, we observe a positive correlation between forecasting and classification performance. These findings challenge the assumption that task-specific pre-training is necessary, and suggest that learning to forecast may provide a powerful route toward constructing general-purpose time series foundation models.
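The abstract's core recipe can be illustrated in a few lines: embed each series with a frozen pre-trained forecasting backbone, augment the embedding, and fit a lightweight classifier on top. The sketch below is a minimal, hypothetical rendition of that idea, not the paper's exact method: the `FrozenEncoder` is a random frozen GRU standing in for a real pre-trained forecasting model, and mean-pooling plus concatenated summary statistics are illustrative choices for the extraction strategy and the model-agnostic embedding augmentation.

```python
# Sketch: frozen forecasting model as a zero-shot feature extractor
# for time series classification. All architectural choices here are
# placeholder assumptions, not the paper's method.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)

class FrozenEncoder(nn.Module):
    """Stand-in for a pre-trained forecasting backbone (weights frozen)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        for p in self.parameters():          # freeze: no gradient updates
            p.requires_grad_(False)

    @torch.no_grad()
    def forward(self, x):                    # x: (batch, length, 1)
        states, _ = self.rnn(x)              # per-step hidden states
        return states.mean(dim=1)            # mean-pool over time -> embedding

def embed(encoder, series):
    """Series -> embedding, with a simple model-agnostic augmentation:
    concatenating per-series summary statistics (an illustrative choice)."""
    x = torch.tensor(series, dtype=torch.float32).unsqueeze(-1)
    z = encoder(x).numpy()
    stats = np.stack([series.mean(axis=1), series.std(axis=1)], axis=1)
    return np.concatenate([z, stats], axis=1)

# Toy data: two classes of noisy sinusoids with different frequencies.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 128)
y_all = rng.integers(0, 2, size=200)
X = np.stack([np.sin((1 + y) * t) + 0.3 * rng.standard_normal(len(t))
              for y in y_all])

enc = FrozenEncoder()
Z = embed(enc, X)                            # encoder stays frozen throughout
clf = LogisticRegression(max_iter=1000).fit(Z[:150], y_all[:150])
print("held-out accuracy:", clf.score(Z[150:], y_all[150:]))
```

Only the final logistic-regression probe is trained here; the encoder is never updated, which is what makes the extracted features "zero-shot" in the sense the abstract describes.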
Similar Papers
Foundation Models and Fine-Tuning: Toward a New Generation of Models for Time Series Forecasting
Machine Learning (CS)
Predicts future events with new data.
Re(Visiting) Time Series Foundation Models in Finance
Computational Finance
Teaches computers to predict stock prices better.
One-Embedding-Fits-All: Efficient Zero-Shot Time Series Forecasting by a Model Zoo
Machine Learning (CS)
Smartly picks best AI for future guesses.