Can Reasoning Models Reason about Hardware? An Agentic HLS Perspective
By: Luca Collini, Andrew Hennessee, Ramesh Karri, and more
Potential Business Impact:
Helps computers design computer chips faster.
Recent Large Language Models (LLMs) such as OpenAI o3-mini and DeepSeek-R1 use enhanced reasoning through Chain-of-Thought (CoT). Their potential in hardware design, which relies on expert-driven iterative optimization, remains unexplored. This paper investigates whether reasoning LLMs can address challenges in High-Level Synthesis (HLS) design space exploration and optimization. During HLS, engineers manually define pragmas/directives to balance performance and resource constraints. We propose an LLM-based agentic optimization framework that automatically restructures code, inserts pragmas, and identifies optimal design points via feedback from HLS tools and access to integer linear programming (ILP) solvers. Experiments compare reasoning models against conventional LLMs on benchmarks using success rate, efficiency, and design quality (area/latency) metrics, and provide a first glimpse into the CoTs produced by a powerful open-source reasoning model, DeepSeek-R1.
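To make the pragma-insertion step concrete, below is a minimal sketch of the kind of HLS kernel such an agent would annotate. The kernel name, array sizes, and the specific directives are illustrative assumptions following common Vitis HLS pragma conventions, not code from the paper; under a plain C compiler the pragmas are ignored, so the function still runs and can be tested in software.

```c
#define N 16

// Illustrative HLS kernel: the pragma placement shown here (array
// partitioning plus loop pipelining) is an assumed example of the
// directives an optimization agent might insert, not the paper's code.
void vadd(const int a[N], const int b[N], int out[N]) {
    // Ask the HLS tool to split the arrays into registers so all
    // elements can be accessed in the same cycle (trades area for speed).
#pragma HLS ARRAY_PARTITION variable=a complete
#pragma HLS ARRAY_PARTITION variable=b complete
    for (int i = 0; i < N; i++) {
        // Pipeline the loop with an initiation interval of 1:
        // one new iteration starts every clock cycle.
#pragma HLS PIPELINE II=1
        out[i] = a[i] + b[i];
    }
}
```

Balancing such directives is the trade-off the abstract describes: aggressive partitioning and pipelining lower latency but consume more FPGA resources, which is why the search over pragma configurations is framed as an optimization problem.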
Similar Papers
Have Large Language Models Learned to Reason? A Characterization via 3-SAT Phase Transition
Artificial Intelligence
Helps computers truly think, not just guess.
Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models
Computation and Language
Makes smart computer programs think faster, not waste words.
Exploring Chain-of-Thought Reasoning for Steerable Pluralistic Alignment
Computation and Language
Lets AI understand different opinions and viewpoints.