Evaluating Large Language Models for Real-World Engineering Tasks

René Heesch, Sebastian Eilermann, Alexander Windmann, Alexander Diedrich, Oliver Niggemann

Published: 01 Jan 2026 · Last Modified: 26 Jan 2026 · License: CC BY-SA 4.0
Abstract: Large Language Models (LLMs) are transformative not only for daily activities but also for engineering tasks. However, current evaluations of LLMs in engineering exhibit two critical shortcomings: (i) a reliance on simplified use cases, often adapted from examination materials where correctness is easily verifiable, and (ii) the use of ad hoc scenarios that insufficiently capture critical engineering competencies. Consequently, the assessment of LLMs on complex, real-world engineering problems remains largely unexplored. This paper addresses this gap by introducing a curated database of 78 questions derived from authentic, production-oriented engineering scenarios, systematically designed to cover core competencies such as prognosis and diagnosis. Using this dataset, we evaluate four state-of-the-art LLMs, including both cloud-based and locally hosted instances, to systematically investigate their performance on complex engineering tasks. Our results show that LLMs demonstrate strengths in basic temporal and structural reasoning but struggle significantly with abstract reasoning, formal modeling, and context-sensitive engineering logic.