Abstract: Large language models (LLMs) have emerged as transformative tools within the hardware design and verification lifecycle, offering numerous capabilities for accelerating design processes. Recent research has showcased the efficacy of LLMs in translating design specifications into source code through hardware description languages. Researchers are also using LLMs to generate test cases and write assertion rules to bolster the detection of hardware vulnerabilities. Consequently, the semiconductor industry is swiftly integrating LLMs into its design workflows. However, this adoption is not without its challenges. While LLMs offer remarkable benefits, they concurrently introduce security concerns that demand a thorough examination. These concerns manifest as potential vulnerabilities indirectly introduced into designs during code generation, or as novel avenues of exploitation that directly equip attackers. In this paper, we discuss the emerging security implications of the capabilities introduced by LLMs in the context of hardware design and verification, evaluate the capabilities of existing security detection and mitigation techniques, and highlight possible future security attacks that leverage LLMs.