hdl2v: A Code Translation Dataset for Enhanced LLM Verilog Generation
By: Charles Hong, Brendan Roberts, Huijae An, and more
Potential Business Impact:
Helps computers write better code for electronics.
Large language models (LLMs) are playing an increasingly large role in domains such as code generation, including hardware code generation, where Verilog is the key language. However, the amount of publicly available Verilog code pales in comparison to the amount available for software languages like Python. In this work, we present hdl2v ("HDL-to-Verilog"), a dataset that seeks to increase the amount of available human-written Verilog data by translating or compiling three other hardware description languages (VHDL, Chisel, and PyMTL3) to Verilog. Furthermore, we demonstrate the value of hdl2v for enhancing LLM Verilog generation by improving the performance of a 32-billion-parameter open-weight model by up to 23% (pass@10) on VerilogEvalV2, without using any data augmentation or knowledge distillation from larger models. We also show that hdl2v boosts the performance of a data-augmentation-based fine-tuning approach by 63%. Finally, we characterize and analyze our dataset to better understand which characteristics of HDL-to-Verilog datasets should be expanded upon in future work for even better performance.
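The core step described in the abstract is mechanical translation or compilation of each source HDL into Verilog. As a concrete illustration of what such a step can look like, here is a minimal sketch using PyMTL3's built-in Verilog backend; the module, port names, and widths are illustrative, and the exact pass API may vary between PyMTL3 versions:

```python
# Sketch: compiling a small PyMTL3 component to Verilog.
# Assumes PyMTL3 is installed (pip install pymtl3); the design is illustrative.
from pymtl3 import Component, InPort, OutPort, mk_bits, update
from pymtl3.passes.backends.verilog import VerilogTranslationPass

class Adder(Component):
    def construct(s, nbits=8):
        s.in0 = InPort(mk_bits(nbits))
        s.in1 = InPort(mk_bits(nbits))
        s.out = OutPort(mk_bits(nbits))

        @update
        def upblk():
            s.out @= s.in0 + s.in1  # combinational add

top = Adder()
top.elaborate()
# Enable translation and run the pass; this emits a .v file
# containing the translated Verilog for the component.
top.set_metadata(VerilogTranslationPass.enable, True)
top.apply(VerilogTranslationPass())
```

Chisel designs can be lowered to Verilog in a similar spirit (e.g., via its ChiselStage driver), while VHDL, being a standalone HDL rather than a Verilog-generating embedded DSL, requires a source-to-source translator.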
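For reference on the pass@10 numbers quoted above: pass@k is conventionally computed with the unbiased estimator from Chen et al.'s Codex paper, which estimates the probability that at least one of k samples, drawn from n generations of which c pass, is correct. A small self-contained sketch with illustrative sample counts:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations of which c are
    correct, passes the tests."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative: 20 generations per problem, 3 of them correct
print(pass_at_k(n=20, c=3, k=10))  # 17/19 ~= 0.8947
```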
Similar Papers
Enhancing Large Language Models for Hardware Verification: A Novel SystemVerilog Assertion Dataset
Machine Learning (CS)
Helps computers write code to check computer chips.
Customizing a Large Language Model for VHDL Design of High-Performance Microprocessors
Hardware Architecture
Helps engineers understand computer chip code better.
HLS-Eval: A Benchmark and Framework for Evaluating LLMs on High-Level Synthesis Design Tasks
Hardware Architecture
Helps computers design computer chips faster.