EngChain: A Symbolic Benchmark for Verifiable Multi-Step Reasoning in Engineering
By: Ayesha Gull, Muhammad Usman Safder, Rania Elbadry, and more
Potential Business Impact:
Tests if AI can solve hard engineering problems.
Large Language Models (LLMs) are increasingly being applied to specialized, high-stakes domains like engineering, which demands rigorous evaluation of their complex reasoning capabilities. While current benchmarks assess language understanding, factual recall, mathematics, or code generation, none captures the integrative reasoning central to engineering, where scientific principles, quantitative modeling, and practical constraints must converge. To address this gap, we introduce EngChain, a benchmark for verifiable multi-step engineering problem-solving. EngChain contains 90 problems spanning three engineering branches, organized into 9 domains and 20 distinct areas. The problems are generated from symbolic templates with a high degree of randomization to ensure diversity and eliminate the risk of contamination. With this benchmark, we move beyond final-answer accuracy with a two-stage evaluation: we first quantitatively verify the numerical and semantic validity of each reasoning step, and then introduce LLM-As-A-Judge, an automated system that qualitatively categorizes the identified reasoning errors.
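To make the template-and-verification idea concrete, here is a minimal sketch of what a symbolic problem template with randomized parameters and a step-level numerical check could look like. This is an illustration only, not the authors' implementation: the function names (ohms_law_template, verify_step), the parameter ranges, and the 1% relative tolerance are all assumptions for the example.

```python
import random

def ohms_law_template(seed=None):
    """Hypothetical symbolic template: generate a randomized circuit
    problem together with a verifiable step-by-step ground-truth chain."""
    rng = random.Random(seed)
    voltage = rng.uniform(5.0, 24.0)       # source voltage in volts (assumed range)
    resistance = rng.uniform(10.0, 470.0)  # resistance in ohms (assumed range)
    current = voltage / resistance         # expected intermediate result

    problem = (
        f"A {resistance:.1f} ohm resistor is connected to a "
        f"{voltage:.1f} V source. Find the current and the power dissipated."
    )
    # Each step carries a symbolic relation plus its numeric value,
    # so every intermediate result can be checked independently.
    steps = [
        {"relation": "I = V / R", "value": current},
        {"relation": "P = V * I", "value": voltage * current},
    ]
    return problem, steps

def verify_step(claimed_value, true_value, rel_tol=0.01):
    """Stage-one style quantitative check of a single reasoning step."""
    return abs(claimed_value - true_value) <= rel_tol * abs(true_value)

if __name__ == "__main__":
    problem, steps = ohms_law_template(seed=42)
    print(problem)
    for step in steps:
        print(step["relation"], "->", round(step["value"], 4))
```

Because the parameters are redrawn on every instantiation, the surface form of each problem changes while the underlying symbolic solution stays checkable, which is the property the benchmark relies on to limit contamination; the paper's second, qualitative stage (LLM-As-A-Judge) is not shown here.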
Similar Papers
EngiBench: A Benchmark for Evaluating Large Language Models on Engineering Problem Solving
Artificial Intelligence
Tests if computers can solve tricky real-world problems.
FinChain: A Symbolic Benchmark for Verifiable Chain-of-Thought Financial Reasoning
Computation and Language
Teaches computers to do complex money math, step by step.
ARCHE: A Novel Task to Evaluate LLMs on Latent Reasoning Chain Extraction
Artificial Intelligence
Teaches computers to break down scientific reasoning into steps.