Evaluating Large Language Models for Real-World Engineering Tasks
By: Rene Heesch, Sebastian Eilermann, Alexander Windmann, and more
Potential Business Impact:
Tests AI language models on real-world engineering problems.
Large Language Models (LLMs) are transformative not only for daily activities but also for engineering tasks. However, current evaluations of LLMs in engineering exhibit two critical shortcomings: (i) the reliance on simplified use cases, often adapted from examination materials where correctness is easily verifiable, and (ii) the use of ad hoc scenarios that insufficiently capture critical engineering competencies. Consequently, the assessment of LLMs on complex, real-world engineering problems remains largely unexplored. This paper addresses this gap by introducing a curated database comprising over 100 questions derived from authentic, production-oriented engineering scenarios, systematically designed to cover core competencies such as product design, prognosis, and diagnosis. Using this dataset, we evaluate four state-of-the-art LLMs, including both cloud-based and locally hosted instances, to systematically investigate their performance on complex engineering tasks. Our results show that LLMs demonstrate strengths in basic temporal and structural reasoning but struggle significantly with abstract reasoning, formal modeling, and context-sensitive engineering logic.
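The abstract describes the evaluation setup only at a high level. As a rough illustration of the kind of harness it implies (a competency-tagged question set, several cloud-based or locally hosted models, per-competency scoring), here is a minimal Python sketch. The Question fields, the query_model stub, and the exact-match scorer are illustrative assumptions, not the authors' actual benchmark.

```python
from dataclasses import dataclass


@dataclass
class Question:
    prompt: str        # engineering question posed to the model
    competency: str    # e.g. "product design", "prognosis", "diagnosis"
    reference: str     # reference answer used for scoring


def query_model(model_name: str, prompt: str) -> str:
    """Hypothetical stub: a real harness would call a cloud API or a
    locally hosted model here. Returns a canned string so the sketch runs."""
    return "stub answer"


def is_correct(answer: str, reference: str) -> bool:
    """Naive exact-match scorer (an assumption for this sketch); open-ended
    engineering answers would realistically need expert or rubric-based grading."""
    return answer.strip().lower() == reference.strip().lower()


def evaluate(models: list[str], questions: list[Question]) -> dict[str, dict[str, float]]:
    """Per-model accuracy, broken down by engineering competency."""
    results: dict[str, dict[str, float]] = {}
    for model in models:
        correct: dict[str, int] = {}
        total: dict[str, int] = {}
        for q in questions:
            total[q.competency] = total.get(q.competency, 0) + 1
            if is_correct(query_model(model, q.prompt), q.reference):
                correct[q.competency] = correct.get(q.competency, 0) + 1
        results[model] = {c: correct.get(c, 0) / n for c, n in total.items()}
    return results


if __name__ == "__main__":
    questions = [
        Question("Which bearing type suits high axial loads?", "product design", "thrust bearing"),
        Question("A pump's vibration doubles in a week; likely cause?", "diagnosis", "bearing wear"),
    ]
    print(evaluate(["cloud-model", "local-model"], questions))
```

A breakdown by competency, rather than a single aggregate score, matches the paper's framing: its headline result is that performance varies sharply by skill, with basic temporal and structural reasoning holding up while abstract reasoning and formal modeling do not.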
Similar Papers
Unveiling Challenges for LLMs in Enterprise Data Engineering
Databases
Shows where AI struggles with big company data.
Understanding the Role of Large Language Models in Software Engineering: Evidence from an Industry Survey
Software Engineering
Helps coders write better programs faster.
LLMs for Engineering: Teaching Models to Design High Powered Rockets
Software Engineering
Teaches AI models to design high-powered rockets better than people.