Behavior and Representation in Large Language Models for Combinatorial Optimization: From Feature Extraction to Algorithm Selection
By: Francesca Da Ros, Luca Di Gaspero, Kevin Roitero
Recent advances in Large Language Models (LLMs) have opened new perspectives for automation in optimization. While several studies have explored how LLMs can generate or solve optimization models, far less is understood about what these models actually learn regarding problem structure or algorithmic behavior. This study investigates how LLMs internally represent combinatorial optimization problems and whether such representations can support downstream decision tasks. We adopt a twofold methodology that combines direct querying, which assesses the capacity of LLMs to explicitly extract instance features, with probing analyses that examine whether such information is implicitly encoded within their hidden layers. The probing framework is further extended to a per-instance algorithm selection task, evaluating whether LLM-derived representations can predict the best-performing solver. Experiments span four benchmark problems and three instance representations. Results show that LLMs exhibit a moderate ability to recover feature information from problem instances, whether through direct querying or probing. Notably, the predictive power of LLM hidden-layer representations proves comparable to that achieved through traditional feature extraction, suggesting that LLMs capture meaningful structural information relevant to optimization performance.
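To make the probing methodology concrete, the sketch below shows what such a pipeline could look like in Python, assuming a HuggingFace causal LM and a scikit-learn linear probe. The model name (gpt2), the probed layer, and the toy knapsack instances with their solver labels are all illustrative assumptions, not the paper's actual configuration. A linear probe is the usual choice in this line of work: if so simple a classifier decodes the target from the hidden states, the information is plausibly present in the representation rather than computed by the probe itself.

# A minimal probing sketch, assuming a HuggingFace causal LM and
# scikit-learn. Model name, probed layer, instance texts, and solver
# labels are illustrative placeholders, not the paper's actual setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

MODEL_NAME = "gpt2"  # placeholder; the study's LLMs may differ
LAYER = 6            # hypothetical intermediate layer to probe

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def embed(instance_text):
    # Mean-pool the chosen layer's hidden states over all tokens,
    # giving one fixed-size vector per problem instance.
    inputs = tokenizer(instance_text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[LAYER][0].mean(dim=0)

# Toy textual encodings of knapsack instances and a fabricated
# best-performing solver for each; real data would come from the
# benchmark problems and solver runtimes.
instances = [
    f"knapsack: capacity={c}, weights={w}, profits={p}"
    for c, w, p in [
        (50, [10, 20, 30], [60, 100, 120]),
        (80, [15, 25, 40], [30, 90, 160]),
        (30, [5, 10, 20], [20, 45, 70]),
        (60, [12, 18, 35], [50, 80, 140]),
        (90, [20, 30, 45], [70, 110, 150]),
        (40, [8, 16, 24], [25, 60, 95]),
        (70, [14, 28, 42], [55, 105, 130]),
        (55, [11, 22, 33], [40, 85, 125]),
    ]
]
labels = ["greedy", "dp", "greedy", "dp", "greedy", "dp", "greedy", "dp"]

# Fit the probe on the frozen hidden-state embeddings and report
# held-out accuracy on the per-instance algorithm selection task.
X = torch.stack([embed(t) for t in instances]).numpy()
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))

The same scaffold works for the feature-recovery probes by swapping the solver labels for instance features (e.g., capacity or item count), with a regression head in place of the logistic one.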