Time Series Foundation Models: Benchmarking Challenges and Requirements
By: Marcel Meyer, Sascha Kaltenpoth, Kevin Zalipski, and more
Potential Business Impact:
Checks whether forecasting models are evaluated on genuinely unseen future data.
Time Series Foundation Models (TSFMs) represent a new paradigm for time series forecasting, offering zero-shot forecasting capabilities without the need for domain-specific pre-training or fine-tuning. However, as with Large Language Models (LLMs), evaluating TSFMs is difficult: as training corpora grow ever larger, it becomes increasingly challenging to ensure the integrity of benchmarking data. Our investigation of existing TSFM evaluation highlights multiple challenges, ranging from the limited representativeness of benchmark datasets and the lack of spatiotemporal evaluation to risks of information leakage from overlapping and poorly documented datasets, and the memorization of global patterns caused by external shocks such as economic crises or pandemics. Our findings reveal widespread confusion regarding data partitions, risking inflated performance estimates and the incorrect transfer of global knowledge to local time series. We argue for the development of robust evaluation methodologies to prevent pitfalls already observed in LLM and classical time series benchmarking, and call upon the research community to design new, principled approaches, such as evaluations on truly out-of-sample future data, to safeguard the integrity of TSFM assessment.
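To make the "truly out-of-sample future data" requirement concrete, the sketch below shows one minimal way such a guard could look in practice: the evaluation window is only accepted if it begins strictly after the foundation model's pre-training data cutoff. The cutoff date, function name, and toy series are hypothetical and not taken from the paper; a real benchmark would need the model's documented training cutoff and de-duplication against its training corpus.

```python
import pandas as pd

# Hypothetical pre-training cutoff of the TSFM under test; in practice this
# must come from the model's documentation or training data card.
MODEL_PRETRAINING_CUTOFF = pd.Timestamp("2023-01-01")

def temporal_holdout(series: pd.Series, eval_start: pd.Timestamp):
    """Split a time-indexed series into context and evaluation windows,
    rejecting any evaluation window that overlaps the model's training era."""
    if eval_start <= MODEL_PRETRAINING_CUTOFF:
        raise ValueError(
            "Evaluation window overlaps the model's pre-training period; "
            "scores would reflect leakage or memorization, not zero-shot skill."
        )
    context = series[series.index < eval_start]      # visible history for the forecast
    evaluation = series[series.index >= eval_start]  # held-out future ground truth
    return context, evaluation

# Usage with a toy daily series: only data from 2024 onward is used for scoring.
idx = pd.date_range("2020-01-01", "2024-12-31", freq="D")
y = pd.Series(range(len(idx)), index=idx)
context, evaluation = temporal_holdout(y, pd.Timestamp("2024-01-01"))
```

Such a check addresses only the temporal side of leakage; overlap between benchmark series and the model's pre-training corpus would still have to be ruled out separately.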
Similar Papers
Re(Visiting) Time Series Foundation Models in Finance
Computational Finance
Teaches computers to predict stock prices better.
Time Series Foundation Models for Multivariate Financial Time Series Forecasting
General Finance
Helps forecast financial variables with less data.
Evaluating Time Series Foundation Models on Noisy Periodic Time Series
Machine Learning (CS)
AI struggles to predict future patterns with noise.