Navigating SoC Security Landscape on LLM-Guided Paths

Published: 01 Jan 2024 · Last Modified: 04 Jun 2025 · ACM Great Lakes Symposium on VLSI 2024 · CC BY-SA 4.0
Abstract: Large Language Models (LLMs) are increasingly recognized for their exceptional abilities in comprehending natural language, conducting advanced reasoning, and generating contextual responses, and they expedite code generation, verification, and bug-fixing tasks across both software and hardware domains. Hardware design typically involves translating natural language specifications into Hardware Description Languages (HDLs) such as Verilog or SystemVerilog, followed by circuit synthesis, physical layout, and fabrication, with the potential for human error at each step. In current industry practice, HDL verification relies on the manual expertise of security professionals to detect and address vulnerabilities. Modern System-on-Chip (SoC) designs integrate several Intellectual Property (IP) blocks implemented in HDL that communicate through a common bus to perform the intended functions. Ensuring security throughout the SoC design process requires innovative solutions because of the complexity of SoC designs and the distribution of security assets across multiple IP blocks. Popular conversational LLMs such as OpenAI's ChatGPT and Google's Gemini (formerly Bard) offer the potential to automate HDL code generation and verification by interpreting user prompts expressed as natural language descriptions, thereby minimizing manual effort and improving hardware design quality. This paper surveys recent research on LLM-driven HDL generation, verification, and bug fixing, discusses prevailing challenges, and presents opportunities for improvement.
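
The prompt-driven workflow the abstract describes can be pictured with a minimal sketch, assuming the OpenAI Python client and an API key in the environment; the model name, prompt wording, and example module are illustrative, not taken from the paper.

```python
# Minimal sketch of prompt-driven HDL generation with a conversational LLM.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical natural-language specification for a small design.
prompt = (
    "Write synthesizable Verilog for a 4-bit synchronous counter with an "
    "active-high reset, and note any security-relevant concerns (e.g., "
    "uninitialized state) in comments."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a hardware design assistant."},
        {"role": "user", "content": prompt},
    ],
)

# The generated HDL would still require synthesis, simulation, and security
# review before use, consistent with the verification gap the paper discusses.
print(response.choices[0].message.content)
```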