Score: 2

Combinatorial Optimization via LLM-driven Iterated Fine-tuning

Published: March 10, 2025 | arXiv ID: 2503.06917v2

By: Pranjal Awasthi, Sreenivas Gollapudi, Ravi Kumar, and more

BigTech Affiliations: Google

Potential Business Impact:

Combines LLMs with classical algorithms so software can solve hard, constrained optimization problems, such as scheduling, clustering, and network connectivity, more reliably.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We present a novel way to integrate flexible, context-dependent constraints into combinatorial optimization by leveraging Large Language Models (LLMs) alongside traditional algorithms. Although LLMs excel at interpreting nuanced, locally specified requirements, they struggle with enforcing global combinatorial feasibility. To bridge this gap, we propose an iterated fine-tuning framework where algorithmic feedback progressively refines the LLM's output distribution. Interpreting this process as simulated annealing, we introduce a formal model based on a "coarse learnability" assumption, providing sample complexity bounds for convergence. Empirical evaluations on scheduling, graph connectivity, and clustering tasks demonstrate that our framework balances the flexibility of locally expressed constraints with rigorous global optimization more effectively than baseline sampling methods. Our results highlight a promising direction for hybrid AI-driven combinatorial reasoning.
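
The abstract outlines a concrete loop: sample candidate solutions from the LLM, use an exact algorithmic check to filter out infeasible or poor candidates, fine-tune the model on the accepted ones, and cool an acceptance temperature across rounds. The sketch below is a minimal, runnable Python toy of that loop on a subset-sum task; the `ToyLLM` class, the task itself, and the annealing-style acceptance rule are illustrative assumptions, not the paper's actual model, benchmarks, or fine-tuning procedure.

```python
import itertools
import math
import random

ITEMS, TARGET = [3, 7, 12, 5, 9], 16  # toy instance: pick a subset summing to 16

def is_feasible(sol):
    """Local constraint check (stand-in for the locally specified requirements)."""
    return len(sol) > 0

def objective(sol):
    """Global algorithmic score: distance to the target sum (lower is better)."""
    return abs(sum(sol) - TARGET)

class ToyLLM:
    """Softmax distribution over all subsets; a proxy for an LLM's output distribution."""
    def __init__(self):
        self.logits = {s: 0.0
                       for n in range(len(ITEMS) + 1)
                       for s in itertools.combinations(ITEMS, n)}

    def sample(self):
        cands = list(self.logits)
        weights = [math.exp(self.logits[c]) for c in cands]
        return random.choices(cands, weights=weights, k=1)[0]

    def fine_tune(self, accepted, lr=1.0):
        # "Fine-tuning": upweight the solutions the algorithmic filter accepted.
        for sol in accepted:
            self.logits[sol] += lr

def iterated_fine_tuning(model, rounds=15, samples=16, temp=2.0, cooling=0.7):
    best, best_score = None, math.inf
    for _ in range(rounds):
        candidates = [model.sample() for _ in range(samples)]
        accepted = []
        for cand in candidates:
            if not is_feasible(cand):
                continue  # algorithmic feedback rejects infeasible output
            score = objective(cand)
            # Annealing-style acceptance: always keep improvements; keep worse
            # candidates with a probability that shrinks as the temperature cools.
            if score <= best_score or random.random() < math.exp(-(score - best_score) / temp):
                accepted.append(cand)
                if score < best_score:
                    best, best_score = cand, score
        model.fine_tune(accepted)  # sharpen the distribution around good solutions
        temp *= cooling
    return best, best_score

print(iterated_fine_tuning(ToyLLM()))  # typically finds ((7, 9), 0)
```

In this toy, "fine-tuning" is just an upweighting of accepted solutions in a softmax over a finite candidate set, which mimics the paper's idea of algorithmic feedback progressively sharpening the LLM's output distribution around globally feasible, high-quality solutions.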

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
25 pages

Category
Computer Science:
Machine Learning (CS)