SwiftTS: A Swift Selection Framework for Time Series Pre-trained Models via Multi-task Meta-Learning
By: Tengxue Zhang, Biao Ouyang, Yang Shu, and more
Potential Business Impact:
Finds the best AI model for time-series predictions faster.
Pre-trained models exhibit strong generalization to various downstream tasks. However, given the numerous models available in the model hub, identifying the most suitable one by fine-tuning each candidate individually is time-consuming. In this paper, we propose SwiftTS, a swift selection framework for time series pre-trained models. To avoid expensive forward propagation through all candidates, SwiftTS adopts a learning-guided approach that leverages historical dataset-model performance pairs across diverse horizons to predict model performance on unseen datasets. It employs a lightweight dual-encoder architecture that embeds time series and candidate models with rich characteristics, computing patchwise compatibility scores between data and model embeddings for efficient selection. To further enhance generalization across datasets and horizons, we introduce a horizon-adaptive expert composition module that dynamically adjusts expert weights, together with transferable cross-task learning that samples cross-dataset and cross-horizon tasks to improve out-of-distribution (OOD) robustness. Extensive experiments on 14 downstream datasets and 8 pre-trained models demonstrate that SwiftTS achieves state-of-the-art performance in time series pre-trained model selection.
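The abstract does not spell out the scoring details, but the core idea of ranking candidate models by patchwise compatibility between data embeddings and model embeddings (instead of fine-tuning every candidate) can be sketched as below. All names here (`compatibility_score`, `select_model`, the toy embeddings) are illustrative assumptions, not the authors' code; SwiftTS learns its encoders and scores from historical dataset-model performance pairs, whereas this sketch uses random vectors and plain cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def compatibility_score(data_patches, model_emb):
    """Mean patchwise similarity between a dataset's patch embeddings
    and one candidate model's embedding (one scalar per candidate)."""
    return float(np.mean([cosine(p, model_emb) for p in data_patches]))

def select_model(data_patches, model_embs):
    """Rank candidate pre-trained models by predicted compatibility,
    avoiding a forward pass through every candidate model."""
    scores = {name: compatibility_score(data_patches, emb)
              for name, emb in model_embs.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Toy setup: 4 patches of one time series, 3 hypothetical candidates.
d = 16
patches = rng.normal(size=(4, d))
candidates = {f"model_{i}": rng.normal(size=d) for i in range(3)}
best, scores = select_model(patches, candidates)
print(best, {k: round(v, 3) for k, v in scores.items()})
```

In the actual framework, both embeddings would come from the learned dual encoders, and the horizon-adaptive expert composition module would reweight multiple such scoring experts depending on the forecasting horizon.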
Similar Papers
TS2Vec-Ensemble: An Enhanced Self-Supervised Framework for Time Series Forecasting
Machine Learning (CS)
Predicts future events better by combining learned patterns and cycles.
SWIFT: Mapping Sub-series with Wavelet Decomposition Improves Time Series Forecasting
Machine Learning (CS)
Predicts future events accurately on small devices.
SynTSBench: Rethinking Temporal Pattern Learning in Deep Learning Models for Time Series
Machine Learning (CS)
Tests computer predictions to find the best ones.