From Text to Time? Rethinking the Effectiveness of the Large Language Model for Time Series Forecasting

Published: April 9, 2025 | arXiv ID: 2504.08818v1

By: Xinyu Zhang, Shanshan Feng, Xutao Li

Potential Business Impact:

Clarifies how much value pre-trained LLM backbones actually add to time series forecasting, helping teams decide whether LLM-based forecasters are worth building and running.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Using pre-trained large language models (LLMs) as the backbone for time series prediction has recently gained significant research interest. However, the effectiveness of LLM backbones in this domain remains a topic of debate. Based on thorough empirical analyses, we observe that training and testing LLM-based models on small datasets often causes the Encoder and Decoder to become overly adapted to the dataset, thereby obscuring the true predictive capabilities of the LLM backbone. To investigate the genuine potential of LLMs in time series prediction, we introduce three pre-training models with identical architectures but different pre-training strategies. This large-scale pre-training allows us to create unbiased Encoder and Decoder components tailored to the LLM backbone. Through controlled experiments, we evaluate the zero-shot and few-shot prediction performance of the LLM, offering insights into its capabilities. Extensive experiments reveal that although the LLM backbone demonstrates some promise, its forecasting performance is limited. Our source code is publicly available at the anonymous repository: https://anonymous.4open.science/r/LLM4TS-0B5C.
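
The setup the abstract describes, a trainable Encoder and Decoder wrapped around a frozen pre-trained LLM backbone, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the choice of GPT-2 as the backbone, the patch-based linear encoder, and all hyperparameters are examples introduced here, not details taken from the paper.

```python
# Minimal sketch of an LLM-backbone time series forecaster:
# trainable Encoder -> frozen pre-trained LLM -> trainable Decoder.
# Module names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import GPT2Model  # GPT-2 used here only as an example backbone


class LLMForecaster(nn.Module):
    def __init__(self, seq_len=96, pred_len=24, patch_len=16, d_model=768):
        super().__init__()
        assert seq_len % patch_len == 0
        self.n_patches = seq_len // patch_len
        self.patch_len = patch_len

        # Encoder: maps each patch of the input series into the LLM's embedding space.
        self.encoder = nn.Linear(patch_len, d_model)

        # Pre-trained LLM backbone, frozen so that only Encoder/Decoder are trained.
        self.backbone = GPT2Model.from_pretrained("gpt2")
        for p in self.backbone.parameters():
            p.requires_grad = False

        # Decoder: maps the backbone's hidden states to the forecast horizon.
        self.decoder = nn.Linear(self.n_patches * d_model, pred_len)

    def forward(self, x):
        # x: (batch, seq_len) univariate series
        b = x.size(0)
        patches = x.view(b, self.n_patches, self.patch_len)   # (b, n_patches, patch_len)
        tokens = self.encoder(patches)                         # (b, n_patches, d_model)
        hidden = self.backbone(inputs_embeds=tokens).last_hidden_state
        return self.decoder(hidden.reshape(b, -1))             # (b, pred_len)


if __name__ == "__main__":
    model = LLMForecaster()
    history = torch.randn(4, 96)   # 4 example series of length 96
    forecast = model(history)      # -> shape (4, 24)
    print(forecast.shape)
```

Keeping the backbone frozen while training only the Encoder and Decoder mirrors the paper's concern: if those components are fit on a small dataset, their adaptation can mask how much of the forecasting ability actually comes from the LLM itself.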

Country of Origin
🇨🇳 China

Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)