Keywords: NLP: Prompt Engineering / Prompting, NLP: (Large) Language Models, NLP: Generation
Abstract: Large language models (LLMs) show promise in code generation yet struggle with Hardware Description Languages (HDLs) such as Verilog, where models often get stuck in flawed reasoning paths that block accurate synthesis and bug fixing. We introduce Stochastic Prompt Assistance (SPA), a lightweight methodology that leverages LLM prompt sensitivity by injecting controlled, non-semantic perturbations into prompts. This approach encourages exploration of diverse reasoning paths without requiring additional feedback loops. Implemented in the \texttt{SPARC-Debugger}, our automated framework pairs SPA with hierarchical error-pattern matching and achieves consistent gains on Verilog code-generation and debugging benchmarks, including critical “0-to-1” unlocks where baseline models fail entirely. SPA thus provides a complementary and orthogonal mechanism to existing decoding and refinement strategies, improving the reliability of LLMs in HDLs and offering a principled path for broader application in high-precision formal reasoning domains.
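The core SPA idea — sampling semantically neutral prompt variants so repeated queries explore different reasoning paths — can be sketched roughly as follows. This is an illustrative reconstruction based only on the abstract; all function names and the specific perturbations are hypothetical, not the authors' implementation.

```python
import random

# Hypothetical non-semantic perturbations: each returns a variant of the
# prompt whose meaning is unchanged (whitespace, neutral framing text).
NEUTRAL_PERTURBATIONS = [
    lambda p, rng: p + "\n",                      # trailing newline
    lambda p, rng: "# Task\n" + p,                # neutral header
    lambda p, rng: p + "\n-- end of prompt --",   # neutral footer
    lambda p, rng: p.replace(". ", ".  ", rng.randint(1, 3)),  # extra spacing
]

def perturb_prompt(prompt: str, seed: int) -> str:
    """Return one semantically equivalent variant of `prompt`."""
    rng = random.Random(seed)
    variant = prompt
    # Apply one or two randomly chosen neutral perturbations.
    for f in rng.sample(NEUTRAL_PERTURBATIONS, k=rng.randint(1, 2)):
        variant = f(variant, rng)
    return variant

def spa_attempts(prompt: str, n_variants: int = 4) -> list[str]:
    """Generate n prompt variants; each would be sent to the LLM as an
    independent attempt, with no feedback loop between attempts."""
    return [perturb_prompt(prompt, seed) for seed in range(n_variants)]
```

Because the perturbations never alter the task description itself, each variant asks the model the same question; only the surface form differs, which is what nudges the model onto different reasoning paths.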
Primary Area: foundation or frontier models, including LLMs
Submission Number: 14480