NotSoTiny: A Large, Living Benchmark for RTL Code Generation
By: Razine Moundir Ghorab, Emanuele Parisi, Cristian Gutierrez, and more
Potential Business Impact:
Helps computers design computer chips better.
LLMs have shown early promise in generating RTL code, yet evaluating their capabilities in realistic setups remains a challenge. So far, RTL benchmarks have been limited in scale, skewed toward trivial designs, lacking in verification rigor, and vulnerable to data contamination. To overcome these limitations and push the field forward, this paper introduces NotSoTiny, a benchmark that assesses LLMs on the generation of structurally rich and context-aware RTL. Built from hundreds of actual hardware designs produced by the Tiny Tapeout community, our automated pipeline removes duplicates, verifies correctness, and periodically incorporates new designs to mitigate contamination, matching the Tiny Tapeout release schedule. Evaluation results show that NotSoTiny tasks are more challenging than those of prior benchmarks, underscoring its effectiveness in exposing the current limitations of LLMs applied to hardware design and in guiding the improvement of this promising technology.
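The abstract describes an automated curation pipeline (deduplication, correctness verification, periodic refresh) without implementation details. Below is a minimal Python sketch of what the first two stages could look like, assuming Verilog sources deduplicated by normalized-content hashing and checked with an Icarus Verilog testbench run; the directory layout, helper names, and tool choice are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of a dedup-and-verify step like the one the abstract describes.
# All names, the directory layout, and the use of Icarus Verilog are assumptions
# for illustration; the paper's actual pipeline is not specified here.
import hashlib
import re
import subprocess
from pathlib import Path


def normalize_rtl(source: str) -> str:
    """Strip comments and collapse whitespace so trivial edits don't defeat dedup."""
    source = re.sub(r"//.*", "", source)                    # line comments
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.S)   # block comments
    return re.sub(r"\s+", " ", source).strip()


def dedup_designs(design_dir: Path) -> list[Path]:
    """Keep one representative file per normalized-content hash."""
    seen: dict[str, Path] = {}
    for rtl in sorted(design_dir.glob("**/*.v")):
        digest = hashlib.sha256(normalize_rtl(rtl.read_text()).encode()).hexdigest()
        seen.setdefault(digest, rtl)
    return list(seen.values())


def passes_simulation(rtl: Path, testbench: Path) -> bool:
    """Hypothetical correctness check: compile and simulate with Icarus Verilog."""
    build = subprocess.run(
        ["iverilog", "-o", "sim.out", str(testbench), str(rtl)],
        capture_output=True,
    )
    if build.returncode != 0:
        return False
    run = subprocess.run(["vvp", "sim.out"], capture_output=True, text=True)
    return run.returncode == 0 and "FAIL" not in run.stdout


if __name__ == "__main__":
    # Hypothetical layout: designs/<name>/top.v with a matching tb.v alongside.
    kept = dedup_designs(Path("designs"))
    verified = [d for d in kept if passes_simulation(d, d.with_name("tb.v"))]
    print(f"{len(verified)} unique, verified designs out of {len(kept)} deduplicated")
```

A periodic-refresh stage would then rerun this filtering on each new Tiny Tapeout release and fold the surviving designs into the benchmark, which is how the abstract's contamination mitigation is described at a high level.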
Similar Papers
Assessing Large Language Models in Generating RTL Design Specifications
Hardware Architecture
Helps computers understand computer chip plans automatically.
TuRTLe: A Unified Evaluation of LLMs for RTL Generation
Hardware Architecture
Tests AI for making computer chips faster.
RealBench: Benchmarking Verilog Generation Models with Real-World IP Designs
Machine Learning (CS)
Helps computers write complex computer parts.