Combinatorial Optimization via LLM-driven Iterated Fine-tuning
By: Pranjal Awasthi, Sreenivas Gollapudi, Ravi Kumar, and more
Potential Business Impact:
Helps computers solve hard planning problems, such as scheduling and clustering, while respecting both flexible user requirements and strict global constraints.
We present a novel way to integrate flexible, context-dependent constraints into combinatorial optimization by leveraging Large Language Models (LLMs) alongside traditional algorithms. Although LLMs excel at interpreting nuanced, locally specified requirements, they struggle to enforce global combinatorial feasibility. To bridge this gap, we propose an iterated fine-tuning framework in which algorithmic feedback progressively refines the LLM's output distribution. Interpreting this process as simulated annealing, we introduce a formal model based on a "coarse learnability" assumption and provide sample complexity bounds for convergence. Empirical evaluations on scheduling, graph connectivity, and clustering tasks demonstrate that our framework balances the flexibility of locally expressed constraints with rigorous global optimization more effectively than baseline sampling methods. Our results highlight a promising direction for hybrid AI-driven combinatorial reasoning.
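To make the feedback loop described in the abstract concrete, the sketch below shows one plausible reading of it: an LLM-like proposer samples candidate solutions, a classical algorithm checks global feasibility and cost, and accepted samples are fed back to sharpen the proposal distribution under a simulated-annealing temperature schedule. This is a minimal illustration on a toy scheduling instance, not the paper's implementation; the proposer is a simple categorical distribution standing in for an LLM, and all names (propose, feasibility_cost, finetune) are assumptions made for this example.

```python
# Illustrative sketch only: LLM proposer replaced by a per-job categorical distribution.
import math
import random

random.seed(0)

# Toy instance: assign 5 jobs to 2 machines; cost = makespan (max machine load).
JOB_DURATIONS = [3, 1, 4, 1, 5]


def feasibility_cost(assignment):
    """Classical 'global' check: compute the makespan of a job-to-machine assignment."""
    loads = [0, 0]
    for job, machine in enumerate(assignment):
        loads[machine] += JOB_DURATIONS[job]
    return max(loads)


def propose(weights):
    """Stand-in for LLM sampling: draw each job's machine from per-job weights."""
    return [0 if random.random() < w0 / (w0 + w1) else 1 for w0, w1 in weights]


def finetune(weights, accepted, lr=0.3):
    """Stand-in for iterated fine-tuning: nudge the proposal distribution
    toward the algorithmically vetted samples."""
    for assignment in accepted:
        for job, machine in enumerate(assignment):
            w = list(weights[job])
            w[machine] += lr
            weights[job] = tuple(w)
    return weights


weights = [(1.0, 1.0) for _ in JOB_DURATIONS]   # start from a uniform proposer
best, best_cost = None, float("inf")

for round_idx in range(10):
    temperature = 2.0 * (0.7 ** round_idx)      # annealing schedule
    accepted = []
    for _ in range(20):
        candidate = propose(weights)
        cost = feasibility_cost(candidate)
        # Metropolis-style acceptance relative to the best cost seen so far.
        if cost <= best_cost or random.random() < math.exp((best_cost - cost) / temperature):
            accepted.append(candidate)
            if cost < best_cost:
                best, best_cost = candidate, cost
    weights = finetune(weights, accepted)

print("best assignment:", best, "makespan:", best_cost)
```

In the paper's setting, the reweighting step would be an actual fine-tuning pass over the LLM, and the acceptance schedule corresponds to the simulated-annealing view analyzed under the coarse learnability assumption.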
Similar Papers
Large Language Models as End-to-end Combinatorial Optimization Solvers
Artificial Intelligence
Uses language models on their own as end-to-end solvers for hard optimization problems.
Combinatorial Optimization for All: Using LLMs to Aid Non-Experts in Improving Optimization Algorithms
Artificial Intelligence
Helps non-experts use language models to improve optimization algorithms.
Constraint-Compliant Network Optimization through Large Language Models
Networking and Internet Architecture
Uses language models to optimize computer networks while keeping them within their constraints.