LLM Agent for Hyper-Parameter Optimization
By: Wanzhe Wang, Jianqiu Peng, Menghao Hu, and more
Potential Business Impact:
AI learns best settings for flying robots.
Hyper-parameters are critical to the performance of communication algorithms. However, current hyper-parameter optimization approaches for the Warm-Start Particle Swarm Optimization with Crossover and Mutation (WS-PSO-CM) algorithm, designed for radio map-enabled unmanned aerial vehicle (UAV) trajectory and communication, are primarily heuristic, exhibiting a low level of automation and leaving room for performance improvement. In this paper, we design a Large Language Model (LLM) agent for automatic hyper-parameter tuning, built on an iterative framework and the Model Context Protocol (MCP). In particular, the LLM agent is first set up via a profile, which specifies the boundaries of the hyper-parameters, the task objective, the terminal condition, a conservative or aggressive strategy for optimizing hyper-parameters, and the LLM configuration. The LLM agent then iteratively invokes the WS-PSO-CM algorithm for exploration. Finally, the LLM agent exits the loop based on the terminal condition and returns an optimized set of hyper-parameters. Our experimental results show that the minimal sum-rate achieved with hyper-parameters generated by our LLM agent is significantly higher than that achieved with hyper-parameters from human heuristics or random generation. This indicates that an LLM agent with knowledge of PSO and the WS-PSO-CM algorithm is useful for finding high-performance hyper-parameters.
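The abstract describes the agent loop only at a high level. The sketch below illustrates one way the profile, the iterative invocation of WS-PSO-CM, and the terminal condition could fit together; it is a minimal illustration, not the paper's implementation. All names (AgentProfile, run_ws_pso_cm, llm_propose, the example hyper-parameter bounds) are assumptions, and the real system would call an LLM through MCP tools and the actual WS-PSO-CM solver rather than the toy stand-ins used here.

```python
import random
from dataclasses import dataclass

# Hypothetical profile describing the agent setup (field names are illustrative).
@dataclass
class AgentProfile:
    bounds: dict            # hyper-parameter name -> (low, high)
    objective: str          # task objective, e.g. "maximize minimal sum-rate"
    max_iterations: int     # terminal condition: iteration budget
    strategy: str           # "conservative" or "aggressive"

def run_ws_pso_cm(params: dict) -> float:
    """Placeholder for the WS-PSO-CM solver: returns the minimal sum-rate
    achieved with the given hyper-parameters. A real run would optimize the
    UAV trajectory and communication over a radio map."""
    # Toy stand-in objective so the sketch runs end to end.
    return -sum((v - 0.5) ** 2 for v in params.values())

def llm_propose(profile: AgentProfile, history: list) -> dict:
    """Placeholder for the LLM call (in the paper, issued via MCP): given the
    profile and the exploration history, propose the next hyper-parameters.
    Here we simply perturb the best point seen so far."""
    if not history:
        return {k: random.uniform(lo, hi) for k, (lo, hi) in profile.bounds.items()}
    best_params, _ = max(history, key=lambda h: h[1])
    step = 0.05 if profile.strategy == "conservative" else 0.2
    return {
        k: min(max(best_params[k] + random.uniform(-step, step), lo), hi)
        for k, (lo, hi) in profile.bounds.items()
    }

def optimize(profile: AgentProfile) -> dict:
    history = []
    for _ in range(profile.max_iterations):          # loop until terminal condition
        candidate = llm_propose(profile, history)    # agent proposes hyper-parameters
        score = run_ws_pso_cm(candidate)             # agent invokes WS-PSO-CM to evaluate
        history.append((candidate, score))
    return max(history, key=lambda h: h[1])[0]       # return the best set found

if __name__ == "__main__":
    profile = AgentProfile(
        bounds={"inertia_weight": (0.1, 1.0), "crossover_rate": (0.0, 1.0)},
        objective="maximize minimal sum-rate",
        max_iterations=20,
        strategy="conservative",
    )
    print(optimize(profile))
```

In this reading, the LLM replaces hand-tuned heuristics as the proposal mechanism, while WS-PSO-CM remains the inner solver whose output (the minimal sum-rate) serves as feedback for the next proposal.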
Similar Papers
Large Language Model Enhanced Particle Swarm Optimization for Hyperparameter Tuning for Deep Learning Models
Artificial Intelligence
Makes smart computer programs learn faster.
Large Language Models as Particle Swarm Optimizers
Neural and Evolutionary Computing
Helps computers solve tricky problems using smart language.
A Survey on the Optimization of Large Language Model-based Agents
Artificial Intelligence
Makes smart computer helpers plan better.