VeriGRAG: Enhancing LLM-Based Verilog Code Generation with Structure-Aware Soft Prompts
By: Jiayu Zhao, Song Chen
Potential Business Impact:
Makes computer code for chips more correct.
Large language models (LLMs) have demonstrated strong capabilities in generating Verilog code from natural language descriptions. However, Verilog code inherently encodes the structural information of hardware circuits, and effectively leveraging this structure to improve the functional and syntactic correctness of LLM-generated Verilog remains a significant challenge. To address it, we propose VeriGRAG, a novel framework that extracts structural graph embeddings from Verilog code using graph neural networks (GNNs). A multimodal retriever then selects the graph embeddings most relevant to the given generation task, and the VeriFormer module aligns them with the code modality to produce structure-aware soft prompts. Our experiments demonstrate that VeriGRAG substantially improves the correctness of generated Verilog, achieving performance that matches or surpasses the state of the art on both the VerilogEval and RTLLM benchmarks.
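The pipeline the abstract describes (GNN graph embedding, retrieval of relevant embeddings, projection into soft prompts) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the function names (`gnn_embed`, `retrieve`, `veriformer_project`), the mean-aggregation GNN, the cosine-similarity retriever, and the linear stand-in for VeriFormer are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_embed(adj, feats, layers=2):
    """Stand-in GNN: mean-aggregation message passing over the circuit
    graph, then mean pooling to a single graph-level embedding."""
    a = adj + np.eye(adj.shape[0])          # add self-loops
    a = a / a.sum(axis=1, keepdims=True)    # row-normalize adjacency
    h = feats
    for _ in range(layers):
        h = np.tanh(a @ h)                  # propagate + nonlinearity
    return h.mean(axis=0)                   # pool nodes -> graph embedding

def retrieve(query, store, k=2):
    """Cosine-similarity retrieval of the k graph embeddings most
    relevant to the query (a stand-in for the multimodal retriever)."""
    q = query / np.linalg.norm(query)
    sims = [(float(np.dot(q, e / np.linalg.norm(e))), i)
            for i, e in enumerate(store)]
    return [i for _, i in sorted(sims, reverse=True)[:k]]

def veriformer_project(embs, W):
    """Stand-in for VeriFormer: a linear map from graph-embedding space
    into the LLM's token-embedding space, one soft-prompt vector each."""
    return [W @ e for e in embs]

# Toy circuit graphs (nodes ~ gates/wires) with random node features.
store = []
for n in (4, 5, 6):
    adj = (rng.random((n, n)) > 0.5).astype(float)
    adj = np.maximum(adj, adj.T)            # make undirected
    store.append(gnn_embed(adj, rng.random((n, 8))))

# A generation task whose structure is close to stored graph 1.
query = store[1] + 0.01 * rng.random(8)
top = retrieve(query, store, k=2)
W = rng.random((16, 8))                     # projection into a 16-d "LLM" space
soft_prompts = veriformer_project([store[i] for i in top], W)
print(top, len(soft_prompts), soft_prompts[0].shape)
```

In the actual framework the soft-prompt vectors would be prepended to the LLM's input embeddings, so the model conditions on circuit structure without any change to the prompt text.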
Similar Papers
Bridging Code Graphs and Large Language Models for Better Code Understanding
Computation and Language
Helps computers understand computer code better.
Zero-shot Graph Reasoning via Retrieval Augmented Framework with LLMs
Artificial Intelligence
Helps computers answer questions about complex connections.
Large Language Model for Verilog Code Generation: Literature Review and the Road Ahead
Hardware Architecture
AI writes computer chip instructions automatically.