Efficient Model Selection for Time Series Forecasting via LLMs
By: Wang Wei, Tiankai Yang, Hongjie Chen, and more
Potential Business Impact:
AI picks the best computer models for predicting the future.
Model selection is a critical step in time series forecasting, traditionally requiring extensive performance evaluations across various datasets. Meta-learning approaches aim to automate this process, but they typically depend on pre-constructed performance matrices, which are costly to build. In this work, we propose to leverage Large Language Models (LLMs) as a lightweight alternative for model selection. Our method eliminates the need for explicit performance matrices by utilizing the inherent knowledge and reasoning capabilities of LLMs. Through extensive experiments with LLaMA, GPT, and Gemini, we demonstrate that our approach outperforms traditional meta-learning techniques and heuristic baselines, while significantly reducing computational overhead. These findings underscore the potential of LLMs in efficient model selection for time series forecasting.
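To make the idea concrete, the pipeline the abstract describes can be sketched as: describe the dataset's characteristics in a prompt, ask an LLM to choose among candidate forecasters, and parse its reply. This is a minimal illustrative sketch, not the authors' implementation; the candidate model list, prompt wording, and parsing logic are all assumptions.

```python
# Hedged sketch of LLM-based model selection without a performance matrix.
# CANDIDATES, the prompt text, and parse_choice are illustrative assumptions.

CANDIDATES = ["ARIMA", "ETS", "Prophet", "DeepAR", "PatchTST"]

def build_selection_prompt(dataset_meta: dict) -> str:
    """Describe the series to the LLM and ask it to pick one candidate."""
    lines = [f"- {key}: {value}" for key, value in dataset_meta.items()]
    return (
        "You are selecting a forecasting model for a time series.\n"
        "Dataset characteristics:\n" + "\n".join(lines) + "\n"
        f"Choose exactly one model from {CANDIDATES} and reply with its name only."
    )

def parse_choice(llm_reply: str) -> str:
    """Map the LLM's free-text reply onto a known candidate (first match wins)."""
    for model in CANDIDATES:
        if model.lower() in llm_reply.lower():
            return model
    return CANDIDATES[0]  # fall back to a default if nothing matches

# Usage with a stubbed reply; a real run would send `prompt` to an LLM API
# (e.g. LLaMA, GPT, or Gemini, as in the paper's experiments):
meta = {"frequency": "hourly", "length": 8760, "seasonality": "daily + weekly"}
prompt = build_selection_prompt(meta)
choice = parse_choice("I would recommend PatchTST for this series.")
```

The key contrast with meta-learning is that no performance matrix is ever built: the LLM's prior knowledge about model families stands in for the costly per-dataset evaluations.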
Similar Papers
LLMs as In-Context Meta-Learners for Model and Hyperparameter Selection
Machine Learning (CS)
AI helps pick the best computer programs.
Large Language Models for Time Series Analysis: Techniques, Applications, and Challenges
Machine Learning (CS)
Helps computers understand past events to predict future ones.