Score: 1

VeriGRAG: Enhancing LLM-Based Verilog Code Generation with Structure-Aware Soft Prompts

Published: September 27, 2025 | arXiv ID: 2510.15914v1

By: Jiayu Zhao, Song Chen

Potential Business Impact:

Improves the correctness of automatically generated hardware description code (Verilog) used in chip design.

Business Areas:
Field-Programmable Gate Array (FPGA) Hardware

Large language models (LLMs) have demonstrated strong capabilities in generating Verilog code from natural language descriptions. However, Verilog code inherently encodes structural information of hardware circuits, and effectively leveraging this structural information to improve the functional and syntactic correctness of LLM-generated Verilog remains a significant challenge. To address this challenge, we propose VeriGRAG, a novel framework that extracts structural graph embeddings from Verilog code using graph neural networks (GNNs). A multimodal retriever then selects the graph embeddings most relevant to the given generation task, which are aligned with the code modality through the VeriFormer module to produce structure-aware soft prompts. Our experiments demonstrate that VeriGRAG substantially improves the correctness of Verilog code generation, achieving state-of-the-art or superior performance across both the VerilogEval and RTLLM benchmarks.
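To make the pipeline described in the abstract concrete, the sketch below shows one plausible way such a system could be wired together: a small GNN produces a graph-level embedding of a Verilog design, a cosine-similarity retriever picks the closest stored embedding for a new generation task, and a VeriFormer-style adapter projects it into soft-prompt vectors for the LLM. All module names, dimensions, and the retrieval scheme are illustrative assumptions for exposition, not the authors' implementation.

```python
# Hedged sketch of a structure-aware soft-prompt pipeline in the spirit of
# VeriGRAG. Module names, shapes, and retrieval details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphEncoder(nn.Module):
    """Toy GNN: two rounds of mean-style message passing over an adjacency
    matrix, pooled into one structural embedding per design (assumption)."""
    def __init__(self, in_dim: int, hid_dim: int, out_dim: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, in_dim), adj: (num_nodes, num_nodes)
        h = F.relu(self.lin1(adj @ node_feats))  # first message-passing step
        h = self.lin2(adj @ h)                   # second step
        return h.mean(dim=0)                     # graph-level embedding


class VeriFormerAdapter(nn.Module):
    """Projects a retrieved graph embedding into a sequence of soft-prompt
    vectors in the LLM's embedding space (name and shapes are assumptions)."""
    def __init__(self, graph_dim: int, llm_dim: int, num_prompt_tokens: int):
        super().__init__()
        self.num_prompt_tokens = num_prompt_tokens
        self.proj = nn.Linear(graph_dim, llm_dim * num_prompt_tokens)

    def forward(self, graph_emb: torch.Tensor) -> torch.Tensor:
        out = self.proj(graph_emb)
        return out.view(self.num_prompt_tokens, -1)  # (num_prompt_tokens, llm_dim)


def retrieve_top_k(query_emb: torch.Tensor, bank: torch.Tensor, k: int = 1) -> torch.Tensor:
    """Cosine-similarity retrieval over a bank of stored graph embeddings."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), bank, dim=-1)
    idx = sims.topk(k).indices
    return bank[idx]


if __name__ == "__main__":
    graph_dim, llm_dim, n_prompt = 64, 256, 8
    encoder = GraphEncoder(in_dim=16, hid_dim=32, out_dim=graph_dim)
    adapter = VeriFormerAdapter(graph_dim, llm_dim, n_prompt)

    # Pretend three Verilog designs were indexed offline into an embedding bank.
    bank = torch.stack([
        encoder(torch.randn(10, 16), torch.eye(10)) for _ in range(3)
    ])

    # At generation time: embed the query, retrieve the closest structural
    # embedding, and turn it into soft-prompt vectors that would be prepended
    # to the LLM's input embeddings.
    query = torch.randn(graph_dim)
    retrieved = retrieve_top_k(query, bank, k=1).squeeze(0)
    soft_prompt = adapter(retrieved)
    print(soft_prompt.shape)  # torch.Size([8, 256])
```

In a full system, the soft-prompt vectors would be concatenated with the token embeddings of the natural-language specification before being fed to the code LLM; how VeriGRAG trains and conditions on these prompts is described in the paper itself.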

Country of Origin
🇨🇳 China

Page Count
9 pages

Category
Computer Science:
Hardware Architecture