Keywords: Large Language Models, Automated Chip Design, Design Verification
TL;DR: We present LLM4DV, an open-source framework for exploring the potential of large language models (LLMs) for hardware verification. Evaluating five models across eight designs, our analysis shows that LLMs can enable efficient test stimuli generation.
Abstract: Hardware design verification (DV) is a process that checks the functional equivalence of a hardware design against its specifications, improving hardware reliability and robustness. A key task in the DV process is test stimuli generation, which creates a set of conditions or inputs for testing. A major challenge is that existing approaches to test stimuli generation require human effort due to the complexity and specificity of the test conditions required for an arbitrary hardware design. We seek an efficient and automated solution that takes advantage of large language models (LLMs). LLMs have shown promising results for improving hardware design automation, but remain under-explored for hardware DV. In this paper, we propose an open-source benchmarking framework called LLM4DV that efficiently orchestrates LLMs for automated hardware test stimuli generation. Our analysis evaluates five LLMs using six prompting improvements over eight hardware designs and provides insights for future work on LLMs for efficient DV.
Primary Area: datasets and benchmarks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8593