Small, Fast, and Certain: Developing a Specialized Verilog Code Completion Solution for the Enterprise

Published: 30 Oct 2025, Last Modified: 04 Nov 2025 · MLForSys 2025 · CC BY 4.0
Keywords: code completion, Verilog, domain-specific language models, Enterprise AI, confidence-based generation, small language models, fine-tuning
TL;DR: We developed a specialized System Verilog code-completion model for enterprise hardware designers and present our process for building compact, domain-tuned models that deliver low-latency, reliable, confidence-aware completions.
Abstract: We describe the development of a specialized code-completion solution for hardware designers in a large enterprise. It handles their specific flavor of System Verilog and uses a low-latency, on-prem, fine-tuned model. We outline the process of developing this solution, from data curation, through several stages of model fine-tuning with different contexts, to evaluation and real-time confidence assessment. We then present our results for fine-tuning a 1B-parameter model on ~1B tokens of in-domain System Verilog code, achieving high semantic fidelity and low latency for both end-of-line and multi-line completions. Our results demonstrate that small, specialized models can satisfy the latency and privacy requirements of enterprise deployment, offering a viable alternative to general-purpose LLMs in constrained settings.
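The abstract mentions real-time confidence assessment for deciding whether to surface a completion. The paper does not specify the scoring rule; the sketch below shows one common approach, gating on the geometric mean of per-token probabilities (equivalently, the exponentiated mean token log-probability). The threshold value and function names here are illustrative assumptions, not the authors' method.

```python
import math


def completion_confidence(token_logprobs: list[float]) -> float:
    """Geometric mean of per-token probabilities, in [0, 1].

    Equals exp(mean of token log-probs); length-normalized, so long
    completions are not penalized merely for having more tokens.
    """
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))


def should_show(token_logprobs: list[float], threshold: float = 0.8) -> bool:
    """Surface the completion only if confidence clears the threshold.

    The 0.8 threshold is a hypothetical value; in practice it would be
    tuned against acceptance-rate data from the deployment.
    """
    return completion_confidence(token_logprobs) >= threshold


# Example: a completion whose tokens the model assigned high probability
confident = [math.log(p) for p in (0.95, 0.90, 0.92)]
# Example: a completion with uncertain tokens, better suppressed
uncertain = [math.log(p) for p in (0.50, 0.30)]

print(should_show(confident))  # True
print(should_show(uncertain))  # False
```

In an editor integration, suppressing low-confidence completions trades recall for precision: the designer sees fewer suggestions, but a larger fraction of the shown ones are acceptable, which matters for trust in an enterprise setting.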
Submission Number: 15