ConstraintLLM: A Neuro-Symbolic Framework for Industrial-Level Constraint Programming

Published: October 7, 2025 | arXiv ID: 2510.05774v1

By: Weichun Shi, Minghao Liu, Wanting Zhang, and more

Potential Business Impact:

Automates the translation of real-world optimization problems into formal constraint models that symbolic solvers can handle efficiently.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Constraint programming (CP) is a crucial technology for solving real-world constraint optimization problems (COPs), offering rich modeling semantics and high solving efficiency. Using large language models (LLMs) to automatically generate formal models for COPs is a promising approach to building trustworthy neuro-symbolic AI with the help of symbolic solvers. However, CP has received less attention than approaches based on operations research (OR) models. We introduce ConstraintLLM, the first LLM specifically designed for CP modeling, trained on an open-source LLM with multi-instruction supervised fine-tuning. We propose the Constraint-Aware Retrieval Module (CARM) to enhance in-context learning capabilities, integrated into a Tree-of-Thoughts (ToT) framework with a guided self-correction mechanism. Moreover, we construct and release IndusCP, the first industrial-level benchmark for CP modeling, which contains 140 challenging tasks from various domains. Our experiments demonstrate that ConstraintLLM achieves state-of-the-art solving accuracy across multiple benchmarks and outperforms the baselines by 2x on the new IndusCP benchmark. Code and data are available at: https://github.com/william4s/ConstraintLLM.
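To make the abstract concrete, here is a minimal sketch of the kind of constraint optimization problem (COP) such a pipeline targets. The variables, constraints, and objective below are invented for illustration and are not from the paper; a real CP solver would use constraint propagation and backtracking search, whereas this stdlib sketch simply brute-forces the tiny domain:

```python
# Toy COP (illustrative, not from the paper): maximize 3x + 2y
# subject to x + y <= 10 and x - y >= 2, with x, y in 0..9.
# A CP solver generated from a formal model would prune via propagation;
# the domains here are small enough to enumerate exhaustively.
from itertools import product

def solve_toy_cop():
    best = None
    for x, y in product(range(10), repeat=2):
        if x + y <= 10 and x - y >= 2:   # the constraints
            value = 3 * x + 2 * y        # the objective
            if best is None or value > best[2]:
                best = (x, y, value)
    return best

print(solve_toy_cop())  # -> (9, 1, 29)
```

The point of an LLM-based modeling pipeline like the one described is to produce the formal version of such constraints and objectives automatically from a natural-language problem statement, then hand the model to a symbolic solver.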

Country of Origin
🇬🇧 United Kingdom

Page Count
21 pages

Category
Computer Science:
Artificial Intelligence