LLMs as In-Context Meta-Learners for Model and Hyperparameter Selection
By: Youssef Attia El Hili, Albert Thomas, Malik Tiomoko, and more
Potential Business Impact:
AI helps pick the best computer programs and settings.
Model and hyperparameter selection are critical but challenging in machine learning, typically requiring expert intuition or expensive automated search. We investigate whether large language models (LLMs) can act as in-context meta-learners for this task. By converting each dataset into interpretable metadata, we prompt an LLM to recommend both model families and hyperparameters. We study two prompting strategies: (1) a zero-shot mode relying solely on pretrained knowledge, and (2) a meta-informed mode augmented with examples of models and their performance on past tasks. Across synthetic and real-world benchmarks, we show that LLMs can exploit dataset metadata to recommend competitive models and hyperparameters without search, and that improvements from meta-informed prompting demonstrate their capacity for in-context meta-learning. These results highlight a promising new role for LLMs as lightweight, general-purpose assistants for model selection and hyperparameter optimization.
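The abstract describes converting a dataset into interpretable metadata and prompting an LLM in two modes: zero-shot, and meta-informed with examples from past tasks. The following is a minimal sketch of how such prompts could be assembled; the metadata fields, function names, and past-task format are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of the two prompting strategies described in the
# abstract. Field names and prompt wording are assumptions for clarity.

def dataset_metadata(n_samples, n_features, n_classes, missing_frac):
    """Summarize a dataset as interpretable metadata (illustrative fields)."""
    return (f"samples={n_samples}, features={n_features}, "
            f"classes={n_classes}, missing={missing_frac:.0%}")

def zero_shot_prompt(meta):
    """Mode (1): rely solely on the LLM's pretrained knowledge."""
    return ("Given this dataset metadata, recommend a model family "
            f"and hyperparameters:\n{meta}")

def meta_informed_prompt(meta, past_tasks):
    """Mode (2): augment the prompt with (metadata, model, score) examples
    from past tasks, enabling in-context meta-learning."""
    history = "\n".join(
        f"- {m} -> {model} (accuracy {score:.2f})"
        for m, model, score in past_tasks
    )
    return ("Past tasks and the best model found for each:\n"
            f"{history}\n\nNew dataset:\n{meta}\n"
            "Recommend a model family and hyperparameters.")

meta = dataset_metadata(5000, 20, 2, 0.01)
print(zero_shot_prompt(meta))
print(meta_informed_prompt(meta, [
    ("samples=1000, features=10, classes=2, missing=0%",
     "RandomForest(n_estimators=200)", 0.91),
]))
```

In the meta-informed mode, the prompt's prior-task examples play the role that a meta-training set plays in classical meta-learning, but are consumed entirely in context, with no parameter updates.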
Similar Papers
Efficient Model Selection for Time Series Forecasting via LLMs
Machine Learning (CS)
AI picks the best computer models for predicting the future.
Just Because You Can, Doesn't Mean You Should: LLMs for Data Fitting
Machine Learning (CS)
Computers change answers if you rename data.