CALM: Co-evolution of Algorithms and Language Model for Automatic Heuristic Design
By: Ziyao Huang, Weiwei Wu, Kui Wu, and more
Potential Business Impact:
AI learns to solve hard problems faster.
Tackling complex optimization problems often relies on expert-designed heuristics, typically crafted through extensive trial and error. Recent advances demonstrate that large language models (LLMs), when integrated into well-designed evolutionary search frameworks, can autonomously discover high-performing heuristics at a fraction of the traditional cost. However, existing approaches rely predominantly on verbal guidance, i.e., manipulating the prompt generation process to steer the evolution of heuristics, without adapting the underlying LLM. We propose a hybrid framework that combines verbal and numerical guidance, the latter achieved by fine-tuning the LLM via reinforcement learning based on the quality of the heuristics it generates. This joint optimization allows the LLM to co-evolve with the search process. Our method outperforms state-of-the-art (SOTA) baselines across various optimization tasks while running locally on a single 24GB GPU with an INT4-quantized 7B model. It surpasses methods that rely solely on verbal guidance, even when those use significantly more powerful API-based models.
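To make the co-evolution idea concrete, below is a minimal Python sketch of the loop the abstract describes: elite heuristics are spliced into the prompt (verbal guidance), and the reward of each generated heuristic also drives a fine-tuning step on the model (numerical guidance). All function names (llm_generate, evaluate, build_prompt, rl_update) are hypothetical placeholders, not the paper's actual implementation, and the RL update is stubbed rather than a real policy-gradient step.

```python
import random
from dataclasses import dataclass

@dataclass
class Heuristic:
    code: str       # heuristic source text returned by the LLM (placeholder)
    fitness: float  # score on the target optimization task (higher is better)

def llm_generate(prompt: str) -> str:
    """Placeholder for a call to the locally hosted quantized 7B model."""
    return f"# candidate heuristic derived from a prompt of length {len(prompt)}"

def evaluate(code: str) -> float:
    """Placeholder evaluator: run the heuristic on benchmark instances."""
    return random.random()

def build_prompt(parents: list) -> str:
    """Verbal guidance: splice elite parent heuristics into the next prompt."""
    return "Improve on these heuristics:\n" + "\n".join(p.code for p in parents)

def rl_update(model_state: dict, prompt: str, code: str, reward: float) -> dict:
    """Numerical guidance (stub): a reward-weighted fine-tuning step on the LLM."""
    model_state["updates"] = model_state.get("updates", 0) + 1
    return model_state

def coevolve(generations: int = 10, pop_size: int = 8, elite: int = 2) -> Heuristic:
    model_state: dict = {}  # stands in for the LLM's trainable weights
    population = [Heuristic(llm_generate("seed"), 0.0) for _ in range(pop_size)]
    for h in population:
        h.fitness = evaluate(h.code)
    for _ in range(generations):
        population.sort(key=lambda h: h.fitness, reverse=True)
        parents = population[:elite]
        prompt = build_prompt(parents)                   # verbal guidance
        children = []
        for _ in range(pop_size - elite):
            code = llm_generate(prompt)
            fitness = evaluate(code)
            model_state = rl_update(model_state, prompt, code, fitness)  # numerical guidance
            children.append(Heuristic(code, fitness))
        population = parents + children                  # LLM and population co-evolve
    return max(population, key=lambda h: h.fitness)

if __name__ == "__main__":
    print(coevolve().fitness)
```

The sketch only illustrates how the two feedback channels interleave within one evolutionary loop; the paper's actual prompt design, reward shaping, and fine-tuning procedure are specified in the full text.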
Similar Papers
Leveraging Large Language Models to Develop Heuristics for Emerging Optimization Problems
Artificial Intelligence
AI learns to solve tricky problems faster.
Evolutionary thoughts: integration of large language models and evolutionary algorithms
Neural and Evolutionary Computing
AI learns faster by trying many ideas.
Beyond Algorithm Evolution: An LLM-Driven Framework for the Co-Evolution of Swarm Intelligence Optimization Algorithms and Prompts
Neural and Evolutionary Computing
Helps computers find better solutions to hard problems.