Score: 1

When Words Change the Model: Sensitivity of LLMs for Constraint Programming Modelling

Published: November 18, 2025 | arXiv ID: 2511.14334v1

By: Alessio Pellegrino, Jacopo Mauro

Potential Business Impact:

LLMs that translate plain-language problem descriptions into optimisation models become unreliable when the wording changes, limiting their use for automating constraint modelling.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

One of the long-standing goals in optimisation and constraint programming (CP) is to describe a problem in natural language and automatically obtain an executable, efficient model. Large language models (LLMs) appear to bring this vision closer, showing impressive results in automatically generating models for classical benchmarks. However, much of this apparent success may derive from data contamination rather than genuine reasoning: many standard CP problems are likely included in the training data of these models. To examine this hypothesis, we systematically rephrased and perturbed a set of well-known CSPLib problems to preserve their structure while modifying their context and introducing misleading elements. We then compared the models produced by three representative LLMs across original and modified descriptions. Our qualitative analysis shows that while LLMs can produce syntactically valid and semantically plausible models, their performance drops sharply under contextual and linguistic variation, revealing shallow understanding and sensitivity to wording.
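To make the target artifact concrete, here is a minimal sketch (not drawn from the paper) of the kind of executable model an LLM is asked to generate from a natural-language description. It models the classic N-Queens problem, one of the CSPLib benchmarks, using Google's OR-Tools CP-SAT solver; the paper's actual modelling language, solver, and prompt format may differ, and `solve_n_queens` is an illustrative name.

```python
# Illustrative sketch only: one plausible "executable model" for a classic
# CSPLib-style problem (N-Queens), written against OR-Tools CP-SAT.
from ortools.sat.python import cp_model


def solve_n_queens(n: int = 8) -> list[int] | None:
    """Return the column of the queen in each row, or None if infeasible."""
    model = cp_model.CpModel()
    # One decision variable per row: the column where that row's queen sits.
    queens = [model.NewIntVar(0, n - 1, f"q{i}") for i in range(n)]
    # No two queens share a column.
    model.AddAllDifferent(queens)
    # Offsetting each column by +/- its row index turns the two diagonal
    # constraints into all-different constraints over affine expressions.
    model.AddAllDifferent([queens[i] + i for i in range(n)])
    model.AddAllDifferent([queens[i] - i for i in range(n)])

    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return [solver.Value(q) for q in queens]
    return None


if __name__ == "__main__":
    print(solve_n_queens(8))  # e.g. [0, 4, 7, 5, 2, 6, 1, 3]
```

The paper's perturbation experiments would then reword the English description of such a problem while preserving this underlying structure, and check whether the generated model still captures the same constraints.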

Country of Origin
🇩🇰 Denmark

Repos / Data Links

Page Count
9 pages

Category
Computer Science: Artificial Intelligence