Large Language Model-Based Automatic Formulation for Stochastic Optimization Models
By: Amirreza Talebi
Potential Business Impact:
AI turns word descriptions into math models.
This paper presents the first integrated systematic study of the ability of large language models (LLMs), specifically ChatGPT, to automatically formulate and solve stochastic optimization problems from natural language descriptions. Focusing on three key categories, namely joint chance-constrained models, individual chance-constrained models, and two-stage stochastic linear programs (SLP-2), we design several prompts that guide ChatGPT through structured tasks using chain-of-thought and modular reasoning. We introduce a novel soft scoring metric that evaluates the structural quality and partial correctness of generated models, addressing the limitations of canonical and execution-based accuracy. Across a diverse set of stochastic problems, GPT-4-Turbo outperforms other models in partial score, variable matching, and objective accuracy, with cot_s_instructions and agentic emerging as the most effective prompting strategies. Our findings reveal that, with well-engineered prompts and multi-agent collaboration, LLMs can effectively generate stochastic formulations, paving the way for intelligent, language-driven modeling pipelines in stochastic optimization.
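For context, the three model classes named in the abstract have standard textbook forms. The sketch below uses conventional notation (assuming amsmath/amssymb), which may differ from the notation used in the paper itself.

```latex
% Conventional textbook forms of the three model classes; the paper's
% own notation may differ.

% Individual chance constraints: each row must hold with its own
% probability level 1 - alpha_i.
\begin{align*}
\min_{x}\; & c^{\top} x \\
\text{s.t. } & \mathbb{P}\{\, a_i(\xi)^{\top} x \le b_i(\xi) \,\} \ge 1 - \alpha_i,
  \quad i = 1, \dots, m .
\end{align*}

% Joint chance constraint: all rows must hold simultaneously with
% probability at least 1 - alpha.
\begin{align*}
\min_{x}\; & c^{\top} x \\
\text{s.t. } & \mathbb{P}\{\, A(\xi)\, x \le b(\xi) \,\} \ge 1 - \alpha .
\end{align*}

% Two-stage stochastic linear program (SLP-2): the first-stage decision x
% is made before the uncertainty xi is revealed; the recourse decision y
% is made afterward.
\begin{align*}
\min_{x}\; & c^{\top} x + \mathbb{E}_{\xi}\,[\, Q(x, \xi) \,]
  \quad \text{s.t. } A x = b,\; x \ge 0, \\
Q(x, \xi) &= \min_{y}\,\{\, q(\xi)^{\top} y \;:\; W y = h(\xi) - T(\xi)\, x,\; y \ge 0 \,\} .
\end{align*}
```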
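The abstract does not spell out how the soft scoring metric is computed. As a purely illustrative sketch, not the paper's actual metric, partial-credit scoring over model components might look like the following; the component names, weights, and Jaccard-style matching rule are all assumptions made here for illustration.

```python
# Illustrative sketch only: the paper's actual soft scoring metric is not
# specified in the abstract. This hypothetical version awards partial
# credit for the fraction of decision variables, constraints, and
# objective terms a generated model shares with a reference model.

def soft_score(generated: dict, reference: dict,
               weights: dict = None) -> float:
    """Return a partial-correctness score in [0, 1].

    `generated` and `reference` map component names ("variables",
    "constraints", "objective") to sets of canonicalized strings.
    All names and weights are assumptions for illustration.
    """
    weights = weights or {"variables": 0.3, "constraints": 0.4, "objective": 0.3}
    score = 0.0
    for component, weight in weights.items():
        gen = generated.get(component, set())
        ref = reference.get(component, set())
        if not ref:
            continue
        # Jaccard-style overlap gives credit for partially correct models.
        score += weight * len(gen & ref) / len(gen | ref)
    return score


if __name__ == "__main__":
    reference = {
        "variables": {"x1", "x2"},
        "constraints": {"P(a^T x <= b) >= 0.95", "x >= 0"},
        "objective": {"min c^T x"},
    }
    generated = {
        "variables": {"x1", "x2"},
        "constraints": {"x >= 0"},  # missed the chance constraint
        "objective": {"min c^T x"},
    }
    print(f"soft score = {soft_score(generated, reference):.2f}")  # 0.80
```

A metric like this is "soft" in the sense that a model missing one constraint still earns most of the credit, unlike canonical or execution-based accuracy, which score such a model as simply wrong.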
Similar Papers
Large Language Models for Education and Research: An Empirical and User Survey-based Analysis
Artificial Intelligence
Helps students and researchers learn and solve problems.
ChatGPT or A Silent Everywhere Helper: A Survey of Large Language Models
Computation and Language
Lets computers talk and write like people.
The Lazy Student's Dream: ChatGPT Passing an Engineering Course on Its Own
Computers and Society
AI can pass a college engineering class.