An Agent-Based Framework for the Automatic Validation of Mathematical Optimization Models
By: Alexander Zadorojniy, Segev Wasserkrug, Eitan Farchi
Potential Business Impact:
Checks if computer math problems are solved right.
Recently, using Large Language Models (LLMs) to generate optimization models from natural language descriptions has become increasingly popular. However, a major open question is how to validate that the generated models are correct and satisfy the requirements defined in the natural language description. In this work, we propose a novel agent-based method for the automatic validation of optimization models that builds upon and extends methods from software testing to address optimization modeling. This method consists of several agents that first generate a problem-level testing API, then generate tests utilizing this API, and, lastly, generate mutations specific to the optimization model (mutation testing is a well-known software testing technique for assessing the fault-detection power of a test suite). We detail this validation framework and show, through experiments, the high quality of validation provided by this agent ensemble in terms of the well-known software testing measure of mutation coverage.
Similar Papers
From Natural Language to Solver-Ready Power System Optimization: An LLM-Assisted, Validation-in-the-Loop Framework
Artificial Intelligence
AI helps plan power grids better and faster.
Automated Design Optimization via Strategic Search with Large Language Models
Machine Learning (CS)
Helps computers design better code faster and cheaper.
Toward a Trustworthy Optimization Modeling Agent via Verifiable Synthetic Data Generation
Artificial Intelligence
Teaches computers to solve math problems perfectly.