When Words Change the Model: Sensitivity of LLMs for Constraint Programming Modelling
By: Alessio Pellegrino, Jacopo Mauro
Potential Business Impact:
LLM-generated constraint models degrade sharply when problem descriptions are reworded.
One of the long-standing goals in optimisation and constraint programming is to describe a problem in natural language and automatically obtain an executable, efficient model. Large language models appear to bring this vision closer, showing impressive results in automatically generating models for classical benchmarks. However, much of this apparent success may derive from data contamination rather than genuine reasoning: many standard CP problems are likely included in the training data of these models. To examine this hypothesis, we systematically rephrased and perturbed a set of well-known CSPLib problems to preserve their structure while modifying their context and introducing misleading elements. We then compared the models produced by three representative LLMs across original and modified descriptions. Our qualitative analysis shows that while LLMs can produce syntactically valid and semantically plausible models, their performance drops sharply under contextual and linguistic variation, revealing shallow understanding and sensitivity to wording.
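To make the perturbation protocol concrete, here is a minimal sketch, not taken from the paper, of how a context-preserving rewording and comparison harness might look in Python. The substitution table, the perturb helper, and the ask_llm stub are all hypothetical illustrations; the paper's actual rephrasings and model comparison are more systematic.

```python
import re

# Surface perturbation: swap the problem's context words while preserving
# the underlying combinatorial structure (here, n-queens recast as antennas).
# This substitution table is a hypothetical example, not the paper's data.
CONTEXT_SWAPS = {
    "queens": "radio antennas",
    "chessboard": "frequency grid",
    "attack": "interfere with",
}

def perturb(description: str) -> str:
    """Rephrase the description's context without touching its structure."""
    for original, replacement in CONTEXT_SWAPS.items():
        description = re.sub(rf"\b{original}\b", replacement, description)
    return description

def ask_llm(description: str) -> str:
    """Hypothetical stub: send the description to an LLM and return a model
    (e.g., MiniZinc source). A real harness would call a model API here."""
    raise NotImplementedError

original = ("Place n queens on an n x n chessboard so that "
            "no two queens attack each other.")
variant = perturb(original)
print(variant)
# -> "Place n radio antennas on an n x n frequency grid so that
#     no two radio antennas interfere with each other."
```

The comparison step would then contrast ask_llm(original) with ask_llm(variant), checking whether both generated models encode the same constraints despite the changed wording, for instance by comparing their solution sets on small instances.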
Similar Papers
Robustness is Important: Limitations of LLMs for Data Fitting
Machine Learning (CS)
LLMs' data-fitting results change when variable names change.
CP-Bench: Evaluating Large Language Models for Constraint Modelling
Artificial Intelligence
A benchmark for testing how well LLMs write constraint models.