Peering Inside the Black Box: Uncovering LLM Errors in Optimization Modelling through Component-Level Evaluation
By: Dania Refai, Moataz Ahmed
Potential Business Impact:
Helps computers turn plain-language problem descriptions into correct, solvable math models.
Large language models (LLMs) are increasingly used to convert natural language descriptions into mathematical optimization formulations. Current evaluations often treat formulations as a whole, relying on coarse metrics like solution accuracy or runtime, which obscure structural or numerical errors. In this study, we present a comprehensive, component-level evaluation framework for LLM-generated formulations. Beyond the conventional optimality gap, our framework introduces metrics such as precision and recall for decision variables and constraints, constraint and objective root mean squared error (RMSE), and efficiency indicators based on token usage and latency. We evaluate GPT-5, LLaMA 3.1 Instruct, and DeepSeek Math across optimization problems of varying complexity under six prompting strategies. Results show that GPT-5 consistently outperforms other models, with chain-of-thought, self-consistency, and modular prompting proving most effective. Analysis indicates that solver performance depends primarily on high constraint recall and low constraint RMSE, which together ensure structural correctness and solution reliability. Constraint precision and decision variable metrics play secondary roles, while concise outputs enhance computational efficiency. These findings highlight three principles for NLP-to-optimization modeling: (i) complete constraint coverage prevents violations, (ii) minimizing constraint RMSE ensures solver-level accuracy, and (iii) concise outputs improve computational efficiency. The proposed framework establishes a foundation for fine-grained, diagnostic evaluation of LLMs in optimization modeling.
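The component-level metrics described in the abstract can be sketched in a few lines. This is an illustrative reading only, not the paper's actual implementation: the function names, the exact-string matching of constraints, and the coefficient pairing are all assumptions made for the sketch.

```python
import math

def precision_recall(generated, reference):
    """Precision/recall of generated formulation components (e.g. constraints
    or decision variables) against a reference formulation. Components are
    compared as sets under exact matching here; the paper's matching
    procedure may be more sophisticated (assumption for this sketch)."""
    gen, ref = set(generated), set(reference)
    matched = gen & ref
    precision = len(matched) / len(gen) if gen else 0.0
    recall = len(matched) / len(ref) if ref else 0.0
    return precision, recall

def coefficient_rmse(gen_coeffs, ref_coeffs):
    """RMSE over paired constraint (or objective) coefficients, assuming a
    one-to-one alignment between generated and reference coefficients."""
    diffs = [(g - r) ** 2 for g, r in zip(gen_coeffs, ref_coeffs)]
    return math.sqrt(sum(diffs) / len(diffs))

# Toy example: constraints identified by a canonical string form.
ref = {"x + y <= 10", "x >= 0", "y >= 0"}
gen = {"x + y <= 10", "x >= 0", "x - y <= 5"}  # one spurious, one missing
p, r = precision_recall(gen, ref)              # both 2/3 here
rmse = coefficient_rmse([1.0, 1.0, 10.0], [1.0, 1.2, 10.0])
```

High constraint recall (few missing constraints) and low coefficient RMSE are the two quantities the study finds most predictive of solver-level success.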
Similar Papers
Large Language Model enabled Mathematical Modeling
Computation and Language
Lets computers solve hard problems using normal words.
Mathematical Computation and Reasoning Errors by Large Language Models
Artificial Intelligence
AI learns math better, helps students learn.