Keywords: LLM Agents, Multi-agent Systems, Code Agents
Abstract: Recent advances in Large Language Models (LLMs) have demonstrated remarkable capabilities in mathematical reasoning and code generation. However, LLMs still perform poorly in the simulation domain, especially when tasked with generating Simulink models, which are essential in engineering and scientific research. Our preliminary experiments reveal that LLM agents struggle to produce reliable and complete Simulink simulation code from text-only inputs, likely due to insufficient Simulink-specific data during pre-training. To address this gap, we introduce SimuGen, a multi-modal agentic framework designed to automatically generate accurate Simulink simulation code by leveraging both visual Simulink diagram images and domain knowledge. SimuGen coordinates several specialized agents (an Investigator, a Unit Test Reviewer, a Code Generator, an Executor, a Debug Locator, and a Report Writer), supported by a domain-specific database. This collaborative, modular architecture enables interpretable and robust simulation generation for Simulink. Our code is publicly available at: https://github.com/renxinxing123/SimuGen_beta
Archival Option: The authors of this submission want it to appear in the archival proceedings.
Submission Number: 46
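The abstract above describes a sequential coordination of specialized agents over a shared task state. The following is a minimal Python sketch of how such a pipeline could be wired together; all class names, fields, and behaviors here are illustrative assumptions for exposition only and do not reflect the actual SimuGen implementation (see the linked repository for that).

```python
# Illustrative sketch only: hypothetical agent interfaces, not the authors' code.
from dataclasses import dataclass, field


@dataclass
class Task:
    """Carries the inputs and intermediate artifacts through the pipeline."""
    description: str            # textual task description
    diagram_image: bytes        # Simulink block-diagram image (multi-modal input)
    notes: dict = field(default_factory=dict)


class Agent:
    """Base class for a pipeline stage; subclasses override run()."""
    def run(self, task: Task) -> Task:
        raise NotImplementedError


class Investigator(Agent):
    def run(self, task: Task) -> Task:
        # Hypothetical: look up relevant Simulink blocks in a domain-specific database.
        task.notes["blocks"] = ["Gain", "Integrator", "Scope"]
        return task


class CodeGenerator(Agent):
    def run(self, task: Task) -> Task:
        # Hypothetical: emit MATLAB/Simulink construction code from the gathered notes.
        task.notes["code"] = "new_system('model'); % ... add_block calls here"
        return task


class Executor(Agent):
    def run(self, task: Task) -> Task:
        # Hypothetical: run the generated code and record whether it executed cleanly.
        task.notes["execution_ok"] = True
        return task


def run_pipeline(task: Task, agents: list) -> Task:
    """Pass the task through each agent in order (Investigator -> ... -> Executor)."""
    for agent in agents:
        task = agent.run(task)
    return task


if __name__ == "__main__":
    task = Task(description="Model a first-order low-pass filter", diagram_image=b"")
    result = run_pipeline(task, [Investigator(), CodeGenerator(), Executor()])
    print(result.notes)
```

In a fuller version of this sketch, a Unit Test Reviewer, Debug Locator, and Report Writer would be added as further `Agent` subclasses, with the Debug Locator feeding failed executions back to the Code Generator.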