CO-Bench: Benchmarking Language Model Agents in Algorithm Search for Combinatorial Optimization
By: Weiwei Sun, Shengyu Feng, Shanda Li, and more
Potential Business Impact:
Helps computers solve tricky planning problems better.
Although LLM-based agents have attracted significant attention in domains such as software engineering and machine learning research, their role in advancing combinatorial optimization (CO) remains relatively underexplored. This gap underscores the need for a deeper understanding of their potential in tackling structured, constraint-intensive problems -- a pursuit currently limited by the absence of comprehensive benchmarks for systematic investigation. To address this, we introduce CO-Bench, a benchmark suite featuring 36 real-world CO problems drawn from a broad range of domains and complexity levels. CO-Bench includes structured problem formulations and curated data to support rigorous investigation of LLM agents. We evaluate multiple agentic frameworks against established human-designed algorithms, revealing the strengths and limitations of existing LLM agents and identifying promising directions for future research. CO-Bench is publicly available at https://github.com/sunnweiwei/CO-Bench.
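Since the benchmark's exact problem and scoring interfaces are not given here, the sketch below is only a hypothetical illustration of the setup the abstract describes: a structured CO instance (a toy 0/1 knapsack), a candidate algorithm standing in for an LLM-generated one, and a score that compares its objective value against an established baseline, with infeasible solutions scoring zero. The names KnapsackInstance, greedy_knapsack, and score are invented for this example and are not CO-Bench's actual API.

```python
# Minimal sketch of a CO evaluation loop (illustrative only, not CO-Bench code):
# a structured problem instance, a candidate heuristic, and a relative score.
from dataclasses import dataclass
from typing import List


@dataclass
class KnapsackInstance:
    """A toy 0/1 knapsack instance with a known reference objective value."""
    values: List[int]
    weights: List[int]
    capacity: int
    reference_value: int  # best value from an established baseline (assumed given)


def greedy_knapsack(inst: KnapsackInstance) -> List[int]:
    """Candidate algorithm: take items by value/weight density while they fit."""
    order = sorted(range(len(inst.values)),
                   key=lambda i: inst.values[i] / inst.weights[i],
                   reverse=True)
    chosen, remaining = [], inst.capacity
    for i in order:
        if inst.weights[i] <= remaining:
            chosen.append(i)
            remaining -= inst.weights[i]
    return chosen


def score(inst: KnapsackInstance, solution: List[int]) -> float:
    """Check feasibility, then report the objective ratio to the reference value."""
    total_weight = sum(inst.weights[i] for i in solution)
    if total_weight > inst.capacity:
        return 0.0  # infeasible solutions score zero
    total_value = sum(inst.values[i] for i in solution)
    return total_value / inst.reference_value


if __name__ == "__main__":
    inst = KnapsackInstance(values=[60, 100, 120], weights=[10, 20, 30],
                            capacity=50, reference_value=220)
    sol = greedy_knapsack(inst)
    print(f"chosen items: {sol}, score: {score(inst, sol):.3f}")
```

In a benchmark setting, the candidate algorithm would be replaced by code produced by an LLM agent and run on curated instances, with the score aggregated across problems to compare agentic frameworks against human-designed baselines.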
Similar Papers
CoCo-Bench: A Comprehensive Code Benchmark For Multi-task Large Language Model Evaluation
Software Engineering
Tests how well AI writes and understands code across many tasks.
MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents
Multiagent Systems
Tests how AI teams work together to solve problems.
OPT-BENCH: Evaluating LLM Agent on Large-Scale Search Spaces Optimization Problems
Artificial Intelligence
Helps computers learn to solve hard problems better.