David vs. Goliath: Can Small Models Win Big with Agentic AI in Hardware Design?

Published: December 4, 2025 | arXiv ID: 2512.05073v1

By: Shashwat Shankar, Subhranshu Pandey, Innocent Dengkhw Mochahari, and more

Potential Business Impact:

Enables AI to design chips faster and more cheaply.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Large Language Model (LLM) inference demands massive compute and energy, making domain-specific tasks expensive and unsustainable. As foundation models keep scaling, we ask: is bigger always better for hardware design? Our work tests this by evaluating Small Language Models coupled with a curated agentic AI framework on NVIDIA's Comprehensive Verilog Design Problems (CVDP) benchmark. Results show that agentic workflows, through task decomposition, iterative feedback, and correction, not only unlock near-LLM performance at a fraction of the cost but also create learning opportunities for agents, paving the way for efficient, adaptive solutions in complex design tasks.
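The loop the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' framework: `draft_verilog` and `lint` are hypothetical stand-ins for a Small Language Model call and a real verification tool (e.g. a simulator or linter), wired into a draft-verify-correct cycle with an iteration budget.

```python
from typing import Optional, Tuple

def draft_verilog(spec: str, feedback: Optional[str]) -> str:
    """Stand-in for a Small Language Model call (hypothetical).
    On the first pass it emits an incomplete module; given feedback,
    it 'corrects' the draft."""
    code = f"module {spec}(input a, output y);"
    if feedback:
        code += " assign y = a; endmodule"
    return code

def lint(code: str) -> Optional[str]:
    """Stand-in verifier: returns an error message, or None if clean."""
    return None if code.rstrip().endswith("endmodule") else "missing endmodule"

def agentic_loop(spec: str, budget: int = 3) -> Tuple[str, int]:
    """Draft -> verify -> correct, repeating until the check passes
    or the iteration budget is exhausted."""
    feedback: Optional[str] = None
    for attempt in range(1, budget + 1):
        code = draft_verilog(spec, feedback)
        feedback = lint(code)
        if feedback is None:
            return code, attempt  # design passed verification
    raise RuntimeError("iteration budget exhausted")

code, attempts = agentic_loop("buf1")
```

In this toy run the first draft fails the check, the feedback string is fed back to the model, and the second draft passes; the same feedback channel is what lets a small model recover ground it would lose to a larger one in a single-shot setting.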

Page Count
7 pages

Category
Computer Science:
Machine Learning (CS)